This article is part of the Raisina Edit 2024 series.
Amidst growing geopolitical competition over Artificial Intelligence (AI) and a flood of regulatory discussions seeking to shape AI’s evolution and use, the recent Raisina Dialogue posed a compelling question: Whose AI is it anyway? Current discussions focus on ownership and power when answering this question. This advances individualistic understandings of AI and facilitates a zero-sum approach to AI development and regulation. We should employ an inductive approach instead to ensure AI benefits the many as well as the few.
The issue of ownership
Much of today’s AI is proprietary. While training data, models and tools were traditionally shared freely within the AI community, these norms began to change when OpenAI refused to share its GPT-2 model in 2019, and other companies have since followed suit. In both China and the United States (US)—the world’s AI giants—private-sector AI spending focuses on application-oriented research with commercialisation potential. That spending totalled US$91.9 billion globally in 2022, of which US$47.4 billion was located in the US and US$13.4 billion in China. PricewaterhouseCoopers estimates AI-related productivity gains and increased demand for “AI-enhanced products” will add US$15.7 trillion—or a 14 percent gain—to the global economy by 2030.
Owning AI models and products is clearly profitable, but the influence of ownership extends beyond the bottom line. The potential to assert or retain ownership of advanced AI technologies can alter corporate behaviour, as tech giants seek to secure investments and their competitive edge over rivals. AI ownership also intersects with questions of data ownership and privacy. This can shift social norms and systems towards a “technological colonialism” in which data providers receive neither compensation nor control when their data is used to train AI models. Outsourcing AI-related human labour to countries of the Global South in an attempt to implement ethical guidelines may lead to further exploitation.
Establishing ownership is not simple. Legal cases addressing questions of ownership and commercialisation, such as Elon Musk’s recent suit against OpenAI, are proliferating, and the jurisprudence necessary to address them is still evolving. Stephen Thaler’s failure late last year to secure a patent listing an AI system as its inventor highlights another issue: Who can legitimately claim ownership of an AI product? This question is crucial because determining ownership shapes how AI’s benefits and costs are distributed among individuals and societies.
In fact, research indicates the economic gains arising from AI will be unequal. Despite AI’s potential to advance the UN’s Sustainable Development Goals, the International Monetary Fund (IMF) estimates developing and emerging economies will accrue fewer benefits from AI than advanced economies. Positive changes are also likely to arrive more slowly in low-income economies, according to a survey of chief economists released earlier this year. And inequalities are increasing within countries as well, for instance via gender gaps in hiring or AI tools that prioritise corporate efficiency over racial equity.
The question of power
In contrast, a power perspective focuses on the political legitimacy of AI use and governance. According to Mark Suchman, when people attribute legitimacy to something, they consider it materially useful, the “right thing to do” and/or something that makes sense within the context of their daily lives. Put differently, attributing political legitimacy represents a claim by users that AI is theirs.
In fact, studies have shown the importance of affected populations attributing political legitimacy to digital technologies. Such legitimacy becomes even more important as government uses of AI tools proliferate. Recent examples include India using AI to identify fake fingerprints in its biometric identification programme Aadhaar, China and Japan implementing AI-powered predictive policing, and Canada’s “bomb-in-a-box” programme identifying dangerous cargo.
Yet civil society actors—whose input could greatly enhance legitimacy attribution—remain largely marginalised in national discussions of AI governance, particularly in the Global South. Simultaneously, the complexity and pace of AI development often make governments reliant on tech companies when assessing AI risks, raising the possibility that private interests dominate public ones when AI rules are written.
Power inequalities extend to the global realm as well. While highlighting convergence around ethical principles like transparency, fairness and privacy, Anna Jobin and colleagues are quick to point out that “global regions are not participating equally in the AI ethics debate”, with advantages accruing to “more economically developed countries.” Geopolitical posturing by the US and China is likely to entrench this imbalance across global AI governance discussions. The result is the absence of much of the world population’s interests and concerns from global AI governance.
This is not to say that most of the world is powerless when it comes to staking a claim on AI. Building on initiatives established during its G20 presidency in 2023, India is using its chairship of the Global Partnership on AI to promote global inclusiveness, ethics and social justice. Discussions in Latin America and Africa are highlighting regionally relevant issues and charting paths to enhance human capabilities in AI. And members of the UN’s AI Advisory Body from G77 countries outnumber those from G7 countries two to one, implying at least some gains in inclusiveness at the global level.
Moving discussions from “mine” to “ours”
Eliminating the advantages conveyed by economic ownership and political power is unlikely. So too is disrupting the mutually reinforcing dynamic in which strengths in power compound strengths in ownership, and vice versa. How, then, can we create an AI which we can all claim as our own? How can we move global AI debates from “mine” to “ours”?
The dominant focus on minimising risks and maximising innovation prioritises individualistic gains in AI capacity and control. This is evident in corporate competition to secure top AI talent and in mercantilist policy measures prioritising national gains—in other words, in a zero-sum approach to understanding, developing and regulating AI technologies.
In contrast, an inductive approach to AI gains and risks would be more inclusive and more likely to produce behaviours targeting social, rather than individual, good. Like AI itself, an inductive approach would analyse data to identify patterns of risk and reward and use these to generate solutions and strategies. Implementing such an approach requires two steps.
The first involves broadening input into AI regulatory discussions. “Thought leadership” by states of the Global North complicates efforts to address Southern concerns, even when they are housed under the same discussion topic. For instance, while workers’ rights are widely discussed, the focus tends to be on workers soon to be displaced by AI—primarily an issue for high-income economies—rather than on the physical and mental health strains suffered by Southern workers already employed by AI companies. Similar gaps exist between AI startups and large tech corporations. Consulting a broader segment of people affected by AI will diversify the data available to policymakers when assessing AI’s potential risks and gains for affected populations.
The second step emphasises collaboration and adaptation over creation. The first-mover advantage in AI lies solidly with the US, China and perhaps Europe (which passed its AI Act last week). In this context, collaborations such as the recent partnership between India’s Sarvam AI and Microsoft to build a large language model for India, or Ugandan government projects with Sunbird AI, are crucial. Drawing on private- and public-sector strengths at home and abroad can help sidestep challenges related to cost, infrastructure and data access. Collaborations also facilitate cross-border adaptation of AI use cases by building the working relationships and local datasets necessary to make gains in one location transferrable to another. This route can consequently generate new paths towards shared developmental and AI goals, grounding them in past experience and enriching them with new connections.
An inductive approach to understanding AI development and policy will not solve all the problems accompanying AI’s incursion into our lives. It cannot force policymakers to enhance legal protections for data providers and workers in the Global South, who remain vulnerable to exploitation. Nor can it fully address inequalities in education and resources which continue to hamper progress in some parts of the world. What it can do is provide a more substantial data basis for AI discussions and activate a broader coalition of engaged actors pushing for these data to be incorporated in final policy decisions. That’s good for all of us.
Laura Mahrenbach is an Adjunct Professor at the Technical University of Munich
The views expressed above belong to the author(s).