Global AI governance must empower LMICs to lead and shape health innovation through inclusive, context-driven collaboration
This article is a part of the essay series “U.S.-India AI Fellowship Program”
In an era where Artificial Intelligence (AI) is reshaping every facet of society, establishing ethical governance frameworks is an imperative. According to the European Union (EU), AI comprises systems that display intelligent behaviour by analysing their environment and autonomously taking actions to achieve specific goals. Complementing this definition, UNESCO’s Recommendation on the Ethics of Artificial Intelligence describes AI ethics as a dynamic, holistic framework—rooted in human dignity, well-being, and the prevention of harm—that guides societies in navigating the multifaceted impacts of AI on human lives, communities, and ecosystems.
Global discourse on AI ethics has coalesced around core principles such as transparency, justice, non-maleficence, accountability, and privacy. Meta-analyses, including a landmark study by Jobin et al. (2019) and a more expansive review by Correa et al. (2023), reveal a robust commitment to these values. Yet, a closer look exposes significant disparities in the global conversation. While Western Europe and North America dominate the narrative, voices from the Global South are notably sparse. For example, despite India's rapid strides in AI research, a burgeoning talent pool, and significant contributions to open-source projects and hiring trends, it is represented in only 0.5 percent of AI ethics publications.
This disconnect poses a critical challenge: as AI drives healthcare innovation, ethical frameworks must be contextually adapted to diverse socio-economic and cultural realities. Without such adaptation, AI may deepen existing disparities rather than foster equitable progress.
The rapid expansion of AI has sparked a global conversation about the ethical principles that should guide its development and application, particularly in sensitive sectors like healthcare. Numerous private, public, and multilateral organisations have released AI ethics guidelines, and while a growing convergence has emerged around foundational principles, stark disparities remain in whose perspectives are shaping these norms.
A meta-analysis by Jobin et al. (2019), which reviewed 84 ethics documents, identified five core tenets across frameworks: transparency (86 percent), justice (81 percent), non-maleficence (71 percent), accountability (71 percent), and privacy (56 percent). However, a more recent and expansive meta-review by Correa et al. (2023) covering over 200 global AI ethics documents revealed a significant concentration of authorship: 66 percent of all publications originated from Western Europe and North America, while less than 5 percent came from regions such as Africa, South Asia, and Latin America.
Moreover, 77 percent of all documents analysed were produced by just 13 countries, highlighting a highly centralised global discourse. Intergovernmental organisations such as the EU (4.5 percent) and the United Nations (3 percent) contributed only marginally. These findings underscore the systemic underrepresentation of voices from low- and middle-income countries (LMICs) in shaping the ethical governance of AI, raising concerns about the legitimacy, inclusivity, and contextual relevance of current global standards.
This imbalance in authorship and thought leadership has led to a global AI governance architecture that often fails to account for the public health priorities, infrastructure constraints, and health data system concerns specific to LMICs. As a result, LMICs are frequently faced with a difficult choice: adopt ethical frameworks that are poorly aligned with their local contexts, or attempt to develop fragmented and under-resourced governance systems without adequate global recognition or support.
To move toward sustainable and equitable AI-for-health ecosystems, international policy frameworks must be reshaped with LMICs, not for them. This requires centring LMIC voices, leadership, and lived experiences in both the design and implementation of global AI governance. To truly foster meaningful partnerships between high-income countries (HICs) and LMICs, these frameworks must evolve from top-down, one-size-fits-all approaches to ones that are inclusive, adaptive, and grounded in contextual realities.
Key Strategies for Strengthening Frameworks Through LMIC Collaboration:
By realigning international frameworks to reflect the needs, knowledge, and leadership of LMICs, the global AI health ecosystem can move toward a more inclusive and collaborative model, where innovation is not only driven by equity but governed by it. LMICs have the insight, experience, and urgency to lead this shift; what they need are enabling global structures that recognise and invest in their capacity. This is not just a matter of fairness; it is essential for creating governance systems that are globally legitimate, locally actionable, and resilient in the face of rapid technological change.
Resham Sethi is a part of the U.S.-India AI Fellowship Programme, organised by ORF America and ORF.
The views expressed above belong to the author(s).