Issue Brief | Published on Sep 09, 2019

Artificial Intelligence in Africa’s healthcare: Ethical considerations


This paper is for ORF’s Centre for New Economic Diplomacy (CNED).

Attribution: Laura Sallstrom, Olive Morris and Halak Mehta, “Artificial Intelligence in Africa’s Healthcare: Ethical Considerations”, ORF Issue Brief No. 312, September 2019, Observer Research Foundation.

Artificial intelligence (AI) can improve various aspects of healthcare. It can help reduce annual expenditure,[1] allow early detection of diseases, provide round-the-clock monitoring for chronic disorders, and help limit the exposure of healthcare professionals in contagious environments. The use of AI in healthcare systems in Africa, in particular, can help address inefficiencies such as misdiagnosis, shortages of healthcare workers, and long wait and recovery times. However, it is important to safeguard against issues such as privacy breaches or lack of personalised care and accessibility. The central tenet for an AI framework must be ethics. This brief discusses the benefits and challenges of introducing AI in Africa’s healthcare sector and suggests how policymakers can strike a balance between allowing innovation and protecting data.

AI in the healthcare sector

Globally, the most critical issue in healthcare is providing overarching and effective treatment options that improve standards of living. The World Health Organization (WHO) has developed a five-year strategic plan for reaching public-health targets, as outlined in the Sustainable Development Goals (SDGs). In 2019, the WHO introduced the “triple billion” targets for global health: universal health coverage, protection from health emergencies, and better health and well-being, each for one billion more people worldwide.[2] AI-centric solutions can help achieve these goals by increasing access, improving quality and reducing costs.

Developments in AI will drastically improve health services, diagnostics and personalised medicine. Various initiatives are already employing basic technology applications to provide essential healthcare services, for example, to expectant and nursing mothers. These are particularly relevant in the context of African healthcare, where the technology currently being used can easily incorporate AI-based solutions. For example, Safermom is a Nigerian start-up that empowers pregnant women and new mothers to make informed decisions by using low-cost mobile technologies (two-way SMS, voice calling, and mobile apps) to transmit vital health information.[3]

In addition to improving direct patient care, AI can maximise supply-chain efficiencies, reduce administrative tasks, and streamline and improve life-saving compliance measures. It can also generate new capabilities for safeguarding against public-health epidemics that plague the most vulnerable populations, e.g. the containment of dengue fever or the prediction of birth asphyxia using a mobile phone.[4] Moreover, real-time access to maternal newborn healthcare data can be used to swiftly identify and respond to childhood diseases, malnutrition or related challenges.

Despite countless benefits, however, the application of AI is vulnerable to pitfalls. The current AI-powered health systems suffer from an absence of accurate datasets and the uneven management of sensitive health data. To be sure, the most significant ethical violations are not rooted in malicious intent, but in a lack of awareness of appropriate AI practices and safeguards. Stakeholders, including government and international organisations, are attempting to incorporate and implement safeguarding measures, with ethics as a central tenet of the AI framework.

Increasing access

Using data from the United States (US) as a reference, it is known that cancelled appointments can be costly to doctors. A 2013 cross-university study estimated that the cost of no-show appointments per doctor in the US was US$725.42 per day.[5] This is calculated based on an average daily patient count of 24 patients per doctor, with an 18 percent baseline no-show rate. Thus, individual practices suffer an annual loss of over US$182,000.
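The figures above can be checked with simple arithmetic. The sketch below reproduces the study’s reported numbers; the 251-working-day year is an assumption for illustration, not a figure from the cited study.

```python
# Back-of-the-envelope check of the no-show cost figures cited above.
# The daily cost, patient count and no-show rate come from the 2013 study;
# the working-day count is an assumption (a typical US working year).

DAILY_NO_SHOW_COST = 725.42     # US$ lost per doctor per day
PATIENTS_PER_DAY = 24
NO_SHOW_RATE = 0.18
WORKING_DAYS_PER_YEAR = 251     # assumption

# Roughly four to five appointments are missed per doctor each day.
missed_per_day = PATIENTS_PER_DAY * NO_SHOW_RATE             # 4.32
cost_per_missed_visit = DAILY_NO_SHOW_COST / missed_per_day  # ~US$168

annual_loss = DAILY_NO_SHOW_COST * WORKING_DAYS_PER_YEAR

print(f"Missed appointments per day: {missed_per_day:.2f}")
print(f"Implied cost per missed visit: ${cost_per_missed_visit:,.2f}")
print(f"Annual loss per practice: ${annual_loss:,.2f}")   # ~US$182,080
```

The product of the daily cost and an ordinary working year lands at roughly US$182,000, matching the annual loss quoted in the brief.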

This US data maps most directly onto Africa’s cities, but it is also relevant to the continent’s rural areas. These regions suffer from physician shortages, and patients are frequently unable to see highly in-demand doctors. Where transportation is expensive or difficult, a missed appointment with an in-demand rural doctor can mean not only financial loss but the loss of human life.

AI solutions are making headway in addressing these challenges. For example, the Nigerian company DokiLink helps patients book doctor’s appointments by creating personal calendars for doctors and their aides. The platform also provides means for doctors to collaborate and exchange information concerning medical questions. The founder, Dr. Niyi Osamiluyi, says that AI will “help expand the capacity and capabilities of healthcare providers, especially in the areas of radiology and pathology.”[6]

Medical appointments can often be time-consuming, inconvenient and physically demanding for patients. Geography and economic constraints can limit access for both rural and urban residents, e.g. patients living in remote mountainous regions or those living in urban areas with little access to transportation. In cases where specialised medical professionals and equipment are required, AI-based telemedicine technology can bridge borders, overcome language barriers and address economic constraints. AI has the potential to offer patients unfettered access to specialists around the globe and allow for unprecedented coordination between professionals. With an increase in mobile-phone-based applications, emerging markets can also benefit, notwithstanding the network and infrastructure challenges. For example, Novartis has partnered with Vodacom South Africa to connect community health workers to doctors through mobile technology.[7]

Improving quality

AI is becoming increasingly instrumental for the early detection of diseases, which allows for more accurate diagnoses, reducing instances of misdiagnosis and the resulting health and cost burden to patients. Data-sharing amongst health professionals provides doctors with myriad case studies to inform diagnoses and allow for in-depth analyses of previous studies. This can give physicians a foundational understanding of many illnesses, even without significant prior exposure. In addition to enhanced diagnostic procedures, AI-enabled technologies can provide superior treatment options.

Improving the quality of healthcare systems is beneficial for not only patients but also physicians, nurses and ancillary professionals. According to Athenahealth,[8] physicians spend an average of “40 percent of their time processing thousands of administrative documents and forms and chasing down hundreds of missing lab and imaging orders.”[9] Automation of processes such as cataloging charts, filling prescriptions and transcription services can ease the burden placed on medical professionals and yield positive externalities for patients.

In the context of unstructured medical data, however, sophisticated natural language processing algorithms are necessary. Since much of this technology is currently being developed in Western or Asian contexts, transferring it to African markets may prove challenging. The technology must be adaptable to local languages, allowing for modifications based on different language structures and even speech accents. Moreover, the medical situations themselves may be vastly different, e.g. different types of diseases and health-management systems.

While the uptake of any new system is a challenge that requires various incentive structures, African countries have demonstrable interest. Dr. Osamiluyi says that the use of AI in medical cases in Nigeria will “help to alleviate the lack of human resources for health, poverty and epidemiological transition of disease burden. It will help in primary care by making patient diagnosis faster and more accurate.”[10]

Reducing costs

According to Deloitte’s 2019 Global Healthcare Outlook, many public health systems across the globe remain financially unable to address “accessibility (imbalanced distribution, including a rural-urban divide), affordability (especially for patients with low economic status), awareness (of lifestyle diseases, risk factors, vaccinations), absent or inadequate infrastructure and skilled human resources.”[11] Global healthcare spending is projected to grow at an annual rate of 5.4 percent until 2022, a notable increase from the 2.9 percent of the previous five years.[12] Similarly, government healthcare spending is increasing at an average annual rate of 6.7 percent in West Africa and 4.5 percent in Southern Africa. Despite these efforts, a 2017 report by the World Bank and the WHO indicates that half of the world’s population lacks access to essential health services, with health expenses pushing 100 million people into extreme poverty.[13] The integration of AI into the healthcare space can help check rising medical diagnosis costs, making treatments more affordable.

Nigeria has developed a system called Apmis, which allows healthcare data to be shared and exchanged easily by hospital owners, healthcare professionals, caregivers, patients and other stakeholders, making data-sharing transparent, secure and low-cost.[14] Another successful case is a pilot project between the Kenya Medical Supplies Agency and IBM’s Watson to transform healthcare supply chains. Users can interact with the AI through various platforms, including SMS, computer and voice over data, to improve healthcare logistics such as communication, sending medical records, and appointment updates.[15]

AI-assisted technologies are expected to save the global healthcare industry approximately US$150 billion a year by 2026.[16] Accenture estimates that the top cost savers will come from “robot assisted surgery (US$40 billion), virtual nursing assistants (US$20 billion), administrative workflow assistance (US$18 billion), fraud detection (US$17 billion), dosage error reduction (US$16 billion), connected machines (US$14 billion)” and similar tools.[17] By 2021, the market for AI in healthcare is expected to reach US$6.6 billion, with an annual growth rate of 40 percent.[18] The telemedicine market for virtual appointments is expected to become a US$1.49-billion industry by 2025, an annual growth rate of nearly 20 percent.[19] Thus, the role of AI is crucial in reducing healthcare costs, fostering innovation and creating positive economic output.
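The market projections above all rest on compound annual growth. The sketch below shows the mechanics; the input values are illustrative assumptions, not figures taken from the cited reports.

```python
# Compound annual growth rate (CAGR) projection, the calculation behind
# market forecasts like those cited above. Input values are illustrative.

def project(value: float, annual_rate: float, years: int) -> float:
    """Compound a starting value forward at a fixed annual growth rate."""
    return value * (1 + annual_rate) ** years

# E.g. spending of 100 (any currency unit) growing at 5.4%/yr for 4 years:
print(round(project(100, 0.054, 4), 1))   # 123.4

# A 40% CAGR nearly quadruples a market in four years:
print(round(project(1.0, 0.40, 4), 2))    # 3.84
```

The same one-line formula explains why a 40 percent growth rate produces far larger markets over a few years than the 5-6 percent rates typical of overall healthcare spending.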


Access to data

For AI to function properly, it needs massive amounts of data. If the data is flawed or biased to begin with, the results will also be flawed. Thus, the collection and monitoring of the training datasets that go into an AI algorithm are major challenges in the use of AI in healthcare. For the integration of ethics in AI, it is crucial to ensure the collection of unbiased, accurate data.

In April 2019, the AI Now Institute published a study on gender, race and power in AI, calling out the lack of diversity in AI workplaces and the bias in technologies.[20] Public and private organisations alike are now taking notice of this problem. For example, facial recognition has become a hugely controversial technology because it tends to recognise African faces less accurately than Caucasian ones. This is likely the result of systems using training datasets that are primarily composed of Caucasian faces.

Medical research can also produce false results if it fails to capture the whole of the population being treated. For example, symptoms of heart attacks in women present differently than in men, yet current medical research datasets tend to focus on men. As a result, existing datasets on heart attacks may not be as accurate for women. Similarly, health issues may change significantly across nations and ethnicities. Such variance has been observed in genetic disorders or genetic predispositions (e.g. diabetes is more prevalent in African American communities than in the broader US population), disease prevention (e.g. a European disease-prevention programme may not prioritise water-borne illnesses), and medical infrastructure (e.g. an AI system may anticipate a state-of-the-art operating room rather than a basic medical facility in rural Lesotho).
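One practical safeguard against the dataset skew described above is to audit group representation before training. The sketch below is a minimal illustration; the group labels, records and the 20-percent threshold are assumptions for the example, not a recommended standard.

```python
# Minimal pre-training dataset audit: measure each demographic group's
# share of the records and flag groups that fall below a chosen threshold.
# Group labels and the threshold here are illustrative assumptions.

from collections import Counter

def representation_report(records, group_key, min_share=0.2):
    """Return each group's share of the dataset, flagging groups whose
    share falls below min_share (i.e. likely under-represented)."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    report = {}
    for group, n in counts.items():
        share = n / total
        report[group] = {"share": round(share, 2),
                         "underrepresented": share < min_share}
    return report

# Toy training set heavily skewed toward one group:
data = [{"group": "A"}] * 9 + [{"group": "B"}] * 1
print(representation_report(data, "group"))
# {'A': {'share': 0.9, 'underrepresented': False},
#  'B': {'share': 0.1, 'underrepresented': True}}
```

A check of this kind does not remove bias by itself, but it makes the skew visible early, before a model trained on the data entrenches it.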

Key thought leaders have commented on the development of such technological systems. Harvard professor Jonathan Zittrain has articulated how the generative internet and related systems facilitate new kinds of control. Timnit Gebru, co-founder of “Black in AI,” has similarly discussed the diversity crisis in AI systems. Harvard professor Cass Sunstein has written about how social technologies affect governance and society, and how AI algorithms can be used to overcome the pitfalls of cognitive biases.

For AI to work in the African healthcare sector, native researchers must be involved in the development of new technology, with African datasets informing such development. Any import of foreign AI technology must be done with awareness of its development process and limitations. Policymakers should have transparency into the algorithms and some understanding of the data supporting them. African datasets should be made available to researchers and companies working with imported AI tech, to ensure locally applicable outcomes. Since AI can only deliver what it has learnt, human engagement is necessary to ensure that the learning is unbiased and holistic. For policymakers, this means striking a balance between data access and personal privacy, and ensuring that African data is incorporated in AI development.

Protecting sensitive data

Personal health data—genetic information, biometric indicators such as fingerprints, a person’s HIV status—is often assigned the highest level of regulatory privacy protection. Data privacy and security are key to the implementation of AI-based medical technologies, both for compliance purposes and for public trust in these solutions. A breach of sensitive data can pose a serious threat to public safety and to the efficient, accurate treatment of patients.

Any company leveraging AI techniques in healthcare must be particularly attuned to data-regulation norms and the management of sensitive patient data, to avoid legal and ethical impropriety. Current data-regulation standards for sensitive health data vary widely across regions. The European Union’s (EU) comprehensive data-protection law, the General Data Protection Regulation (GDPR), took effect in 2018; the US’ Health Insurance Portability and Accountability Act (HIPAA) specifically governs the treatment of medical information. Laws protecting personal data are being developed across Africa, including in Kenya, Morocco, Nigeria and South Africa. While there is still a long way to go, some best practices can minimise the vulnerability of sensitive data, e.g. the anonymisation of all datasets used in algorithms, distributed ledgers, multifaceted cyber-security systems, encryption during storage and transmission, proper destruction of identifying information, data-facility security, and targeted investment in IT infrastructure.
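One of the best practices listed above, anonymising the datasets that feed an algorithm, is often approximated in practice by pseudonymisation: replacing direct identifiers with a keyed one-way hash. The sketch below is a simplified illustration; the field names and salt are assumptions, and a real deployment would also need key management and a re-identification risk review.

```python
# Simplified pseudonymisation sketch: direct identifiers are replaced with
# a keyed one-way hash (HMAC-SHA256) so records remain linkable without
# exposing the raw identifier. Field names and salt are illustrative.

import hashlib
import hmac

SECRET_SALT = b"replace-with-a-managed-secret"  # must be stored apart from the data

def pseudonymise(patient_id: str) -> str:
    """Replace a direct identifier with a keyed, irreversible hash."""
    return hmac.new(SECRET_SALT, patient_id.encode(), hashlib.sha256).hexdigest()

record = {"patient_id": "NG-2019-00042", "hiv_status": "positive"}
safe_record = {**record, "patient_id": pseudonymise(record["patient_id"])}

# The hash is stable, so records for the same patient still link up,
# but the original identifier cannot be recovered without the salt.
assert safe_record["patient_id"] == pseudonymise("NG-2019-00042")
assert safe_record["patient_id"] != record["patient_id"]
```

Note that pseudonymised data is not fully anonymous: under regimes such as the GDPR it remains personal data, which is why the brief’s other safeguards (encryption, destruction of identifiers, facility security) still apply.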

Ensuring accountability
Accountability mechanisms for managing healthcare information can promote integrity and durability in AI systems. By approaching AI systems with measured caution, a company can implement checks on the AI algorithm to reduce biases and promote holistic analyses. A study by the biopharmaceutical company Syneos Health found that one of the primary public concerns about AI systems is the “lack of human oversight and the potential for machine errors leading to mismanagement of their health.”[21]

The following examples of current ethical checks highlight best practices for accountability:

  1. Cross-Sector Research Efforts: In 2018, the European Commission set up the High-Level Expert Group on Artificial Intelligence (AI HLEG), a working group comprising representatives from industry, academia and NGOs, which has since released guidance on AI ethics.[22]
  2. Industry-Led Ethical Principles: Entrepreneurs such as Elon Musk, Peter Thiel and Sam Altman, along with major technology companies including Infosys, Microsoft and Amazon, have backed the non-profit AI-research company OpenAI.[23]
  3. Multilateral Cooperation: The International Telecommunication Union (ITU) and the WHO have partnered to create a Focus Group on AI for Health, aiming to establish standards and guidelines for AI-based methods in the healthcare sector. On 4 May 2020, the ITU will hold its fourth annual AI for Good Global Summit, connecting innovators with problem owners for sustainable development. The ITU is an Africa-friendly forum, which may present opportunities for African AI researchers and innovators, as well as related healthcare experts.[24]

In recent years, there has been an increased global effort to establish basic principles for the ethical use of AI and for accountability. However, current regulatory approaches to this field of technology remain mostly in the philosophical realm. Since this aspect of technology development is without precedent, there is little basis for formulating regulations. Moreover, the need to strike a careful balance between allowing technological growth and ensuring accountability renders most current proposals inadequate.

Crafting a policy

The GDPR brought many issues regarding data regulation to the forefront: how to ensure data privacy for individuals; the role of government in the regulation of technology; and the best ways to effectively and ethically leverage big data. AI systems have been subject to sector-specific laws or subject-specific guidelines on a haphazard and piecemeal basis—such as data-protection acts, cyber-security laws, anti-discrimination regulations—creating large regulatory gaps. However, fuelled by concerns regarding the ethical implications of AI usage, countries are now beginning to explore AI-specific guidelines and regulations.

With the increased use of AI to perform tasks, analyse data and create new systems, regulations have begun to emerge, and the public sector is catching up to the discussions of control, monitoring and bias. Nations are participating in the rapid development of AI and its best practices. The most significant geographies in this regard include the EU, the US, Singapore and Dubai; some key international organisations are also active.

The European Union
The EU released the “Ethics Guidelines for Trustworthy AI” in April 2019, through a high-level expert group on AI. The guidelines state key principles that AI systems should abide by, including respect for human autonomy, prevention of harm, fairness and explicability. To uphold these principles, they recommend seven key requirements, including human agency and oversight, privacy and data governance, and diversity, non-discrimination and fairness. The guidelines appropriately link the ethical discussion to the broader discussion surrounding data protection and privacy.

The United States
Under President Donald Trump’s Executive Order on AI, the National Institute of Standards and Technology (NIST) was tasked with creating a plan for federal standards for deploying AI technologies, which it released on 10 August 2019. NIST solicited public comments to formulate the plan. The guide focuses on minimising vulnerability to attacks from bad actors, encouraging innovation, and promoting public confidence in AI.

Singapore
Singapore has released a framework on how AI can be ethically and responsibly used. It is intended to be a living document, evolving as new perspectives and challenges emerge. Certain articles in the framework suggest a nuanced understanding of the challenges of aggregating data and preserving human autonomy. Article 3.6, for instance, acknowledges that individuals live in their unique societal contexts and recommends that organisations operating in multiple countries consider differences in societal norms and values. Article 3.7 states that some risks to individuals may only manifest at the group level.

However, caution is warranted against treating ethics and norms as overly subjective. The framework allows corporations to decide their own ethics, in turn creating scope for the kind of subjectivity that will fail to establish ethical norms in AI.

Dubai
The Smart Dubai Office has created an Ethical AI Toolkit, which provides counsel to individuals and organisations offering AI services. Notably, the toolkit recommends making AI systems explainable, attempting to eliminate the “black box” issue surrounding them. It also suggests carefully examining whether decision-making processes introduce bias. While the document offers some good high-level guidelines, it falls short of describing how such values should be implemented.

Multilateral organisations

International organisations are eager to include AI-related topics in their agendas, work plans and research. Around mid-2020, the Organisation for Economic Co-operation and Development (OECD) is preparing to release AI guidelines at a ministerial meeting, and the International Telecommunication Union (ITU) is set to hold another high-level “AI for Good Global Summit.” Other organisations active in this space include the World Intellectual Property Organisation, which recently issued a major report on AI; the WHO, which is not only issuing its own report but also collaborating with the ITU on a focus group on the topic; and the International Labour Organisation, which has a workstream on the “future of work.” While these discussions can seem detached from national regulatory processes, they promise to be powerful platforms that shape norms and set acceptable parameters for national policymakers looking for guidance.

Conclusion
Technology continues to evolve at a rapid pace, and AI-related progress will only accelerate that change. To successfully integrate AI into the healthcare industry, governments must aim to create regulations that promote ethics by design, i.e. building checks and balances into the systems that utilise AI. It is also important to define what these systems should contain and how the development process is structured. Moreover, issues of human agency and bias must be considered while creating the algorithms.

As with any cost-benefit analysis, however, regulators must weigh the impact of regulation against the stifling of innovation. This brief recommends keeping in mind three key concepts:

  1. Not all AI is the same. There are very different kinds of applications and uses for AI in healthcare. Regulations must take these differences into account and not impose a blanket prohibition on AI use. Further, issues of ethics, trust and fairness must be addressed in conjunction with existing protections, such as consumer protection, consumer rights and data protection.
  2. Policies must be well informed and align with the needs and values of the cultures represented in each African nation, as well as provide a holistic vision for a better future. Thus, AI regulation overlaps with areas such as data privacy, big tech and data regulation, consumer rights, ethics, social justice and law. The government must work with technology companies, researchers and academia, and civil society groups, putting aside differences in perceptions of ethical standards and social justice. This is the only way to ensure that they arrive at the most effective ways to regulate this sector.
  3. AI allows for the creation of solutions in a new way that can mask the underlying logic. Underrepresented and historically marginalised communities are already victimised by existing systems that reinforce their positions in society. Therefore, all stages and elements of AI applications—training data, algorithms, effective performance—must be carefully examined to ensure fairness towards such groups.

Governments have struggled to develop cyber-security and data-privacy norms that foster both security and growth. Today they face the additional challenge of developing norms that also address the ethical use of AI. However, governments cannot formulate regulations regarding the emerging uses of AI based on vague ethical ideas, since that will not only be ineffective but could also be detrimental to the innovative process.

The imperative, therefore, is not policymaking but educating companies about their obligations and the society at large about the potential utility and risks of AI in healthcare. The governments of Africa should focus on “ethics by design” and be forward-looking in technology implementation and use. These measures can help create effective long-term regulations while also allowing for innovative AI solutions for healthcare in African countries.

About the authors

Laura Sallstrom is an owner and member of the board of Access Partnership, a public policy firm that provides market access for technology.

Olive Morris is Policy Analyst at the New Center.

Halak Mehta is Senior Manager of Data & Trust at Access Partnership.


[1] Jeff Lagasse, “Precision medicine has potential to reduce wasteful ineffective treatments, study says,” Healthcare Finance, 2018.

[2] World Health Organisation, Director-General brings ambitious agenda for change to World Health Assembly (Geneva: World Health Organisation, 2018).

[3] Safermom, Improving maternal and child health care (Nigeria, 2018).

[4] Brian Wahl et al., Artificial intelligence (AI) and global health: how can AI contribute to health in resource-poor settings? (BMJ Global Health, 2018).

[5] Bjorn Berg et al., Estimating the cost of no-shows and evaluating the effects of mitigation strategies (Medical Decision Making, 2013), 976-985.

[6] Technopreneur, “Nigerian health experts turn to Artificial Intelligence, mHealth, eHealth” (Nigeria, October 18, 2018).

[7] Technopreneur, Nigerian health experts turn to Artificial Intelligence, mHealth, eHealth (Abuja, 2018).

[8] Jonathan Bush and John Fox, Bringing the Power of Platforms to Health Care (Boston: Harvard Business Publishing, 2017).

[9] Florien Leibert, AI will improve healthcare and cut costs – if we get these 4 things right (Cologny: World Economic Forum, 2018).

[10] Technopreneur, “Nigerian health experts turn to Artificial Intelligence, mHealth, eHealth” (Nigeria, October 18, 2018).

[11] Deloitte, 2019 Global health care outlook (New York: Deloitte Touche Tohmatsu Limited, 2019).

[12] The Economic Intelligence Unit, World Industry Outlook, Healthcare and Pharmaceuticals (Westminster: The Economist, 2018).

[13] World Health Organisation, World Bank and WHO: Half the world lacks access to essential health services, 100 million still pushed into extreme poverty because of health expenses (Tokyo: World Health Organisation, 2017).

[14] Apmis, All Purpose Medical Information System, (Nigeria: 2018).

[15] Medical Technology News South Africa, Solving Africa’s healthcare logistics problems with AI (Cape Town: BizCommunity, 2018).

[16] Accenture, Artificial Intelligence: Healthcare’s New Nervous System (Dublin: Accenture, 2017), 1.

[17] Ibid., 3.

[18] Ibid., 2.

[19] Accuracy Research LLP, Virtual Patient Simulation Market Analysis and Trends- Technology, Product – Forecast to 2025 (Dublin: Accuracy Research LLP, 2016).

[20] Sarah Myers West et al., Discriminating Systems Gender, Race, and Power in AI (New York: AI Now Institute, 2019).

[21] Syneos Health, When It Comes to Artificial Intelligence in Healthcare, Patients Fear the Replacement of Doctors, Yet Are Open to AI Nurse Support (Morrisville: Syneos Health, 2018).

[22] European Commission, Ethics guidelines for trustworthy AI (Brussels: European Commission, 2019).

[23] Darrell West, The role of corporations in addressing AI’s ethical dilemmas (Washington: The Brookings Institution, 2018).

[24] International Telecommunications Union, Focus Group on “Artificial Intelligence for Health” (Geneva: ITU, 2019).

The views expressed above belong to the author(s).

