The global rise of artificial intelligence holds both promise and danger, and Africa needs its own experts to balance the two, Eunice Kilonzo reports.
“I get smarter the more you tell me,” says Ada. “So I am going to ask a few questions that will help me help you. What’s your name?”
Eunice.
“Are you pregnant?”
No.
“Are you a smoker?”
No.
“Have you been diagnosed with high blood pressure?”
Ada sounds and probes like a counsellor, a doctor, or a friend. But she is actually a mobile application that uses artificial intelligence (AI) to assess symptoms and narrow down the probable cause of an ailment.
Ada provides guidance in eight languages—including, since last year, Kiswahili—and, depending on my responses, advises me on what to do next, such as whether to visit a health facility for a conclusive diagnosis and treatment. The app, developed by Ada Health, a Germany-based health tech company, combines a medical knowledge base* with intelligent reasoning technology. It is a taste of healthcare in the digital age: an era in which machines are simulating human tasks, thoughts, and actions.
Ada’s assistance in Kiswahili is the first to target an African region. “AI has huge potential to help improve the efficiency and quality of our healthcare systems through more personalised and predictive care,” says Stefan Germann, CEO of Fondation Botnar, a Switzerland-based philanthropic organisation that helped to develop Ada. “We believe it is vital that these benefits are available to everyone, globally.”
Africa’s AI scene is cooking. In Kenya, the chatbot Sophie Bot helps teenagers manage their sexual and reproductive health. In Nigeria, agri-tech company Zenvus fuses electronics and analytics to empower farmers. In Morocco, AI-powered drones are used to track environmental crimes. And the rest of the world wants a piece: in 2018, Google opened its first African AI research hub in Ghana’s capital Accra, IBM has research centres in South Africa and Kenya, and tech hubs are springing up in cities like Cape Town, Addis Ababa, Kigali, and Nairobi.
Alfred Ongere, the founder of Ai Kenya, a 3,900**-member community of AI enthusiasts in East Africa, can see the attraction Africa holds for international companies. The continent’s high smartphone penetration is yielding a treasure trove of data that can be turned into products and insights, he says. But as Africa’s AI scene begins to flourish, there are pitfalls that the continent needs to be aware of, ranging from biased algorithms that don’t work well for people of African descent to the technology’s ability to sow division. For this reason, AI developers in Africa need to ensure that their applications foster equality and fairness, says Ongere. “Releasing products that don’t work properly would lead to disastrous effects,” he says.
A question of fairness
Financial consultancy PwC says AI could contribute up to US$15.7 trillion to the global economy by 2030. But the technology is not inherently benign. While it can be a tool to achieve transformation, it also has the potential to reinforce structural inequalities and biases, and to perpetuate gender and racial imbalances. One of the problems with AI is that its ‘intelligence’ depends entirely on the data from which it learns. That data could be biased for a variety of reasons—for instance, most medicines have been tested on white men, meaning there is less medical data on women or people of colour.
AI algorithms trained on biased data inherit that bias. For instance, gender biases mean that loan agreement data from 30 to 40 years ago show that far more men were granted such loans than women, says Vukosi Marivate, who holds a chair in data science at the University of Pretoria in South Africa. “If we were to use that data—without trying to correct it—to build an automated system to grant loans, it would be biased,” he says.
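The mechanism Marivate describes can be shown in miniature. The sketch below uses hypothetical, deliberately skewed historical records (not real lending data) and a trivial per-group approval rate as a stand-in for a real model; it illustrates how any system fitted to biased decisions will reproduce them unless the data is corrected first.

```python
# Toy illustration of bias inheritance. The records are invented:
# they mimic historical loan decisions in which men were approved
# far more often than women.
history = (
    [("man", True)] * 80 + [("man", False)] * 20
    + [("woman", True)] * 20 + [("woman", False)] * 80
)

def train(records):
    """'Learn' the historical approval rate per group."""
    counts = {}
    for group, approved in records:
        ok, total = counts.get(group, (0, 0))
        counts[group] = (ok + approved, total + 1)
    return {g: ok / total for g, (ok, total) in counts.items()}

def decide(model, group):
    """Approve when the learned approval rate for the group exceeds 50%."""
    return model[group] > 0.5

model = train(history)
print(decide(model, "man"))    # approved
print(decide(model, "woman"))  # rejected, purely because the data was biased
```

Nothing in the code mentions gender explicitly as a policy; the discrimination emerges entirely from the skew in the training records, which is why auditing and correcting data matters more than auditing the algorithm alone.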
There are examples from other parts of the world of AI discriminating against black people. When the head of Facebook’s AI unit, Lade Obamehinti, tested its Portal Smart Camera (which uses algorithms to identify multiple subjects during video calls), the camera kept focusing on her male colleagues—not on her. Upon examining the datasets used for training the camera’s AI, she found uneven representation of skin tone and gender.
In another example, a viral 45-second video shared in 2017 showed an automatic no-touch soap dispenser that appeared to dish out soap to a white person’s hand but not a black person’s. The light sensors in the dispenser had not been trained to recognise darker skin.
Marivate says he doesn’t know of similar examples from Africa, but that doesn’t mean they have not happened or will not occur. Because we usually trust machines to be right—how many of us have followed our GPS even though we suspected it might be leading us astray?—we might not immediately see the problems, he says. One area of concern, he adds, is the rise of AI-driven closed-circuit TV camera applications for security on the continent. If the algorithms driving these applications don’t recognise black faces properly, people might be wrongly profiled. “Do we understand the shortcomings of the facial recognition systems in these AI deployments?” he asks.
Avoiding nefarious uses
There are other dangers that come with the growing influence of AI. The flood of personal data coming out of social media applications like Facebook, combined with ever-smarter algorithms, has the potential to stoke political tensions, and can even turn the tide of government elections. In 2018, British consulting firm Cambridge Analytica was outed for having used personal data from Facebook users—ostensibly collected for research purposes—to build an algorithm that influenced voting patterns in the 2016 US presidential election.
Developing countries, with their “overstretched and unconsolidated democracies”, are particularly at risk of such nefarious uses, says Clayton Besaw, a research associate at the United States-based One Earth Future foundation, which promotes peace through good governance. He says African countries should introduce regulation to make sure the technology is not abused. “Obviously each country has its own complexities and nuance,” says Besaw. But some, he says, like Kenya or Nigeria, may want to be mindful of politicians or non-state actors who want to use technology to stoke sectarian or ideological flames, promote political violence, or sow distrust in the democratic system.
But, even then, such regulation may not stop states themselves from using the technology against their people. Besaw points to the deal that the government of Zimbabwe struck in March 2018 with Chinese tech firm Cloudwalk to import facial recognition technology, as a case in point. He says such deals are examples of top-down development of AI applications “mainly situated around social control and repression”. Ultimately, he says, African countries will have to find a balance between regulation and freedom that mitigates abuse, while not restricting non-state actors who want to develop beneficial uses.
Building capacity in Africa
Apart from regulating the sector, another way that African nations can protect themselves against bad AI applications is through training. Hila Azadzoy, who heads up global health at Ada Health, says that one of the solutions is to make sure that the teams working on AI applications are diverse in terms of gender, ethnicity, training, and background. This will increase the likelihood of unconscious biases being recognised and addressed, she says. The Swahili app was developed in partnership with the Muhimbili University of Health and Allied Sciences in Tanzania. “We have also expanded our medical content team to ensure representation of physicians from around the world, and invested in additional language skills, ensuring that we have qualified doctors who are also native speakers of our target languages to work on our medical content,” says Azadzoy.
However, in order to contribute to the desired diversity, Africa needs its own AI experts. Both Marivate and Ongere are training young Africans in AI and raising awareness about both the opportunities and the challenges the technology brings. Ai Kenya produces an AI podcast and offers thought leadership in AI ethics in Kenya and beyond. And Marivate is a co-founder of the Deep Learning Indaba, an organisation that runs continental meetings and hands out awards, with the aim of making Africans shapers of AI advances rather than just observers and receivers of foreign technology.
Ongere, in Kenya, says it is “a big myth” that Africans are not doing AI, and that it always has to be imported. However, Africa remains under-resourced in terms of advanced research, he admits. Still, he says, global tech giants need to collaborate, on an equal footing, with local communities to build AI-based solutions.
In South Africa, Marivate agrees. Research on AI is vital if Africans want a voice in the development of the new technology, he says. “We cannot have AI research on Africa that has no Africans participating.”
BOX: Ready, Steady, Go AI
Countries in the Global South risk being left behind as they struggle to take advantage of AI solutions, according to a 2019 report. The Government AI Readiness Index, compiled by consultancy Oxford Insights and the Canada-based International Development Research Centre, gauged governments in 194 countries around the world. Singapore topped the index, followed by Western European governments; Canada, Australia, New Zealand, and four more Asian countries rounded out the top 20. There were no African countries in the top 50, and only 12 in the top 100, out of the 54 African countries included in the analysis. The top five countries in Africa are Kenya, Tunisia, Mauritius, South Africa, and Ghana—in that order.
* Correction: The original article stated that the app “combines a database for 160 different diseases with intelligent reasoning technology.” That figure is the number of diseases Ada has optimised for the Swahili version—conditions that are more prevalent in the region. The actual number of conditions modelled in the database is in the thousands, and growing.
** Correction: This number was changed from the original story which said 2,500 members.
Image by Gerd Altmann from Pixabay