Generative AI has taken the world by storm since the end of 2022. It is not only affecting the current state and future trajectory of artificial intelligence (AI) but also cutting across sectors to produce unanticipated changes in the way people engage with technology. While this has resulted in new ways of working with or around generative AI, especially in creative fields, it has also (re)sounded the alarm on deep-seated AI-associated drawbacks and concerns relating to deepfakes, misinformation, and biases.
Gender biases, in particular, have been an oft-cited but less often resolved by-product of the use of AI. A study in the Stanford Social Innovation Review of publicly available information on 133 AI systems deployed across different economic sectors between 1988 and 2021 found that 44 percent exhibited gender bias and 26 percent exhibited both gender and racial bias. The gender bias seen in AI systems has had varying types and levels of effects on women, ranging from prejudiced hiring systems that eliminate women from applicant pools for no reason other than their gender, to products and services designed without the female experience in mind, producing harms borne exclusively by women as a result of their omission.
Generative AI, hailed as a far more powerful and versatile class of AI system, has unfortunately not been immune to the percolation of gender biases. In fact, given its higher ‘creative’ capacity, owing to large language models (LLMs) trained largely through self-supervised learning, generative AI has been known to amplify existing gender biases to the point where they are exaggerated even beyond already grim realities.
For instance, prompting the generative AI image generator Stable Diffusion to display an engineer returns images of only men, even though women actually make up about one-fifth of engineers. Apart from being misleading, generative AI also has a tendency to cater to an overwhelmingly male audience. Lensa AI, a viral avatar creation app running on generative AI software, has been known to hypersexualise and fetishise the images of women it creates, while images of men are fashioned into typically masculine avatars like warriors and astronauts, without any added nudity.
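This kind of skew is straightforward to surface. The sketch below, a minimal illustration rather than the methodology of any study cited here, uses the open-source diffusers library to sample a batch of images from a neutral occupational prompt; the model checkpoint, prompt, and sample size are assumptions, and the gender tally is left to a human reviewer because automated gender classification is itself error-prone.

```python
# Illustrative audit sketch: generate several images for a neutral prompt
# and save them for manual review. The checkpoint and prompt are assumptions,
# not the setup of the studies referenced in the text.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a photo of an engineer"
for i in range(20):
    image = pipe(prompt).images[0]       # one generated image per call
    image.save(f"engineer_{i:02d}.png")  # inspect and tally perceived gender by hand
```

Comparing the resulting tally against real-world workforce statistics (such as the roughly one-fifth share of women among engineers) is what makes the over-representation measurable rather than anecdotal.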
GPT-4, a leg up from the wildly popular generative AI virtual assistant ChatGPT, is the latest addition to what the author considers ‘gender-ative AI,’ i.e. generative AI systems that are categorically gender biased. It has been known to peddle familiar stereotypes against women in the text it generates, and these have persisted despite OpenAI’s risk assessment in this context. The recurring and disturbing gender biases in the nascent and ever-evolving generative AI domain point towards a systemic flaw in the resolution of these biases at a fundamental level. While the way LLMs process information through deep neural networks is what tends to perpetuate gender biases in generative AI, the root cause of their emergence is the use of misrepresentative or even borderline unethical datasets available in the public domain. Generative AI training datasets, for instance, have been known to include malignant stereotypes and explicit imagery, resulting in the kind of outputs described above.
This is usually due to the marked lack of available gender-representative datasets and the overarching low prioritisation of collecting and analysing gender-disaggregated data. Often, this absence of gender-affirmative practices and/or prioritisation across domains is not a consciously enforced decision but an oversight stemming from the glaring lack of female leaders who can institutionalise change from the top down. The situation is so dire that there are more CEOs named Peter in the UK than there are women CEOs, and similar mind-boggling statistics were observed for common male names in about 10 European countries in 2023. Beyond the dismal share of women in tech as a whole, the software development industry in particular is about 92 percent male, making the presence of gender biases in emerging general-purpose technologies like generative AI unfortunate but wholly unsurprising. The lack of women in the popular perception of these technologies, shaped by the media and creative industries, also adds to the continuous invisibilisation of women in tech. A recent study showed that of 116 AI professionals depicted in key films, only nine were women, half of whom were shown in positions of subordination to male characters or as caricatures of actual professionals.
The dearth of women, and the resulting problem of representation, in the generative AI domain is neither new nor exclusive to it, which is why the solutions are also abundantly clear. Apart from innovative approaches like gender bias mitigation tools for generative AI or women-specific generative AI software, concerted efforts are essential to close the gender data gap before gender bias becomes an inextricable component of emerging generative AI ecosystems. The tech industry in particular could also benefit from gender-sensitisation training, as well as from hiring and elevating women and professionals who understand the gender-tech nexus, which is vital to ensuring the sustainability of these endeavours.
Shimona Mohan is a Research Assistant at the Centre for Security, Strategy and Technology (CSST), Observer Research Foundation
The views expressed above belong to the author(s).