Author: Siddharth Yadav

Expert Speak Digital Frontiers
Published on Dec 17, 2024

AI models are socio-technical products that often reflect historical and socio-political biases. As AI development is largely Western-led, addressing AI bias requires a global, multicultural perspective.

Global perspectives on AI bias: Addressing cultural asymmetries and ethical implications


Introduction

Over the past two years, AI, large language models (LLMs), and generative AI have become fixtures of the tech vocabulary owing to the meteoric commercial success of AI applications like ChatGPT. Generative AI now accounts for a large portion of total investment in AI platforms, and all of the biggest tech companies have released generative AI products of their own in their efforts to achieve AI supremacy. However, the story of AI development has also been marred by issues such as copyright infringement, black-box algorithms, and questions of liability, amongst others.

A persistent problem associated with AI in general, and generative AI in particular, has been various forms of political and cultural bias in the outputs of AI systems. Over the past decade, AI systems used in applications like facial recognition, medical diagnosis and image generation have repeatedly produced racially biased outputs traceable to unbalanced training datasets. The issue of AI bias made its way into the EU AI Act, which entered into force in August 2024 as the first formalised set of laws to regulate AI. The Act recognises that AI bias can cause varying levels of social and economic harm and seeks to regulate AI systems accordingly.

Although biases based on relatively narrow factors such as race, gender, and ethnicity have received significant attention from experts, activists, journalists and policymakers, broader and less obvious cultural biases in the outputs of AI applications remain pervasive. Given that AI development is increasingly integrated with the economic and geopolitical strategies of many countries, biases arising from skewed training datasets are becoming increasingly consequential, particularly for countries situated outside the Anglosphere. Without appropriate oversight, AI applications now used by millions of users globally could exacerbate global cultural asymmetries.


What is AI bias?

Bias in AI primarily arises from two issues: the quantity and the quality of training data. Although it is possible to mitigate bias by changing the training procedure of an AI system, the source of bias is generally understood to lie in the training data itself. Training any language model requires selecting large quantities of text and then categorising and filtering them. However high-quality the selected texts are, they still constitute a small subset of all the text on the web, and every piece of textual information on the web has its own limitations of scope, accuracy, bias and implied worldview. A crucial factor that amplifies bias is the overrepresentation of certain elements pertaining to nationality, cultural perspective, and ideas about gender, race, religion and so on. However, as scholars have noted, “excluding documents that belong to an over-represented domain/genre might lead to discarding high-quality information, whereas increasing the number of documents of a sub-represented class may require significant manual efforts.”

The persistent issue of gender bias in AI models illustrates this problem. Studies have shown that historical stereotypes are reflected in AI text generators, which associate terms like ‘nurse’ or ‘homemaker’ with female identifiers, while associating terms like ‘manager’ or ‘CEO’ with male identifiers.

The problem of AI bias is further amplified by AI image generators. In February 2024, for instance, Google had to pause Gemini’s generation of images of people following a controversy over historically inaccurate images: when asked to generate images of German soldiers from 1943, the model created images depicting people of African and Asian descent in Nazi uniforms. Examples of cultural and racial stereotyping are numerous: asked to create an image of “an Indian person”, image generators frequently depict an older man with a long beard and a turban; asked for Mexicans, they predominantly produce men wearing sombreros; and prompts about Indian cities often yield only polluted, littered streets.

In addition to creating and perpetuating bias and stereotypes, a second-order problem arises in the form of “vicious feedback loops”, wherein biased datasets produce biased outputs, which then become part of the next training dataset. As with most problems arising out of AI, the longitudinal effects of AI bias are unclear, but international multi-stakeholder collaboration will be needed to address the issue given the globally distributed nature of commercial AI use.
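To make the occupational word-association finding above concrete, the sketch below scores a few occupation terms against gendered word sets using pretrained word embeddings. It is a minimal illustration rather than a standard benchmark: it assumes the open-source gensim library and its downloadable “glove-wiki-gigaword-50” vectors, and the word lists and averaging scheme are ad hoc choices for demonstration.

```python
# Minimal sketch: scoring occupation words against gendered word sets with
# pretrained GloVe embeddings. Word lists and scoring are illustrative only.
import gensim.downloader as api

# Downloads the vectors on first use (~66 MB); returns a KeyedVectors object.
model = api.load("glove-wiki-gigaword-50")

male_terms = ["he", "man", "male"]
female_terms = ["she", "woman", "female"]
occupations = ["nurse", "homemaker", "manager", "ceo"]

for occupation in occupations:
    # Mean cosine similarity between the occupation and each gendered set.
    male = sum(model.similarity(occupation, w) for w in male_terms) / len(male_terms)
    female = sum(model.similarity(occupation, w) for w in female_terms) / len(female_terms)
    lean = "female-leaning" if female > male else "male-leaning"
    print(f"{occupation:10s} male={male:.3f} female={female:.3f} -> {lean}")
```

If the published findings on embedding bias hold for these vectors, terms like ‘nurse’ should score closer to the female set and ‘manager’ or ‘CEO’ closer to the male set, mirroring the stereotypes described above.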

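The feedback-loop dynamic can likewise be shown with a toy calculation. The sketch below assumes, purely for illustration, a model that under-represents a minority group relative to its share of the training data, and a pipeline that mixes half of each generation’s outputs back into the next dataset; all parameters are hypothetical, not measurements of any real system.

```python
# Toy simulation of a "vicious feedback loop": a model that slightly
# amplifies the majority group in its outputs, whose outputs are then
# recycled into the next round of training data. Numbers are illustrative.

def model_output_share(train_share: float, amplification: float = 1.5) -> float:
    """Minority share in model outputs, given its share of the training data.
    amplification > 1 means the model exaggerates the existing imbalance
    rather than reproducing it faithfully."""
    minority_mass = train_share ** amplification
    majority_mass = 1.0 - train_share
    return minority_mass / (minority_mass + majority_mass)

minority_share = 0.30  # hypothetical minority share in the original data
for generation in range(1, 6):
    output_share = model_output_share(minority_share)
    # Next training set: half curated original-style data, half recycled output.
    minority_share = 0.5 * minority_share + 0.5 * output_share
    print(f"generation {generation}: minority share = {minority_share:.3f}")
```

Even under these mild assumptions, the minority’s share of the data shrinks with every generation, which is precisely the compounding dynamic the feedback-loop argument warns about.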

Cultural and normative bias

In addition to text-based and image-based bias, another asymmetry in AI systems is cultural bias. With the ongoing legislative and normative push for explainable AI (XAI) and trustworthy AI around the world, the conceptual frameworks within which AI systems operate also need to be examined. Studies of XAI systems have revealed that, in many instances, such systems are biased towards framing their outputs according to the values of Western, Educated, Industrialised, Rich and Democratic (WEIRD) countries. Furthermore, most XAI developers were shown to have little awareness of this bias. From a bottom-up perspective, the source of this framing bias precedes the stage at which training data is selected: since the largest AI developers hail from WEIRD countries, the demographic makeup of AI creators and programmers is itself skewed. It is reasonable to ask how these skewed demographics shape the behaviour of AI systems; however, given the black-box nature of AI algorithms, it is difficult to pin down the relationship between causes and effects.

Recent years have also witnessed an emerging debate, from a top-down perspective, over the West’s normative dominance in AI ethics discussions. This dominance was highlighted in a 2020 ETH Zurich study of AI ethics codes released by different countries throughout the 2010s, which found that 82 percent of the codes came from Western countries, whereas contributions from countries like India and China were practically absent. A 2022 paper drew on that study to compare Western ethical principles for regulating AI and robotics with those of Japan. It found that Western AI ethics discussions invoked the positive values associated with AI less often than the negative ones, and argued that this tendency to frame human-AI relationships as antagonistic reflects a distinctly Western bias. Experts have even gone so far as to state that “Trustworthy AI is a marketing narrative invented by the industry, a bedtime story for tomorrow’s customers”, aimed at using ethics debates to secure lighter regulation.

Although it would be irresponsible to portray AI systems as purely beneficial, the rationale behind promoting certain principles requires examination. A good example is the EU’s effort to regulate AI, most recently through the AI Act that entered into force in 2024. The EU is in the unfavourable position of lagging behind other major players like the US and China in AI development; at the same time, it has invested more than most other countries in regulating AI. This emphasis on regulation has been read as a way to carve out “a niche to impose itself as a major actor in the field of AI” and to protect the EU market from outside players. The argument here is not that ethical principles like trustworthiness are unnecessary or inherently biased; on the contrary, the importance of AI ethics cannot be overstated. Rather, it is necessary to understand that bias in and around AI can operate in obvious as well as subtle ways. It is easy to state that digital technology will reshape societies economically, socially and politically in profound ways over the coming decades, even though there is no clear picture of how that will happen. Countries outside the West’s sphere will therefore need to hold national and multilateral discussions about normative principles for governing AI that reflect their cultural particularities, if they are to maintain their position and heritage in a quickly transforming world.


Going forward

The two major issues highlighted so far are explicit and implicit bias, both in AI outputs and in the discourse around AI. AI developers should be instructed and incentivised to adopt guidelines and protocols for including diverse datasets in AI model training and development. Policymakers and international platforms like the Global Partnership on Artificial Intelligence (GPAI) can promote cross-cultural collaboration between AI developers, researchers and institutions from underrepresented domains and regions; such multilateral collaboration will be necessary to ensure global equity in AI development. To balance the dominance of Western countries in AI ethics discourse, countries outside the Western sphere should establish a coalition or use existing platforms like UNESCO to align AI standards with diverse cultural and societal norms. Finally, adaptive ethical frameworks should be incorporated into policy discussions at the national and international levels to ensure that AI systems operate appropriately within different cultural contexts.


Siddharth Yadav is a PhD scholar with a background in history, literature and cultural studies.

The views expressed above belong to the author(s).
