Author : Tanusha Tyagi

Expert Speak Raisina Debates
Published on Sep 29, 2025

Feminised by design, AI companions commodify intimacy and risk hardwiring gendered stereotypes into our digital lives.

Feminised by Design: Rethinking Gender-Bias in AI Companions


Although Artificial Intelligence (AI) has permeated multiple facets of human life, one area long seemed beyond its reach: emotional support and companionship, the essence of human connection. Today, however, AI has begun to fill these roles as well, roles traditionally reserved for humans. The current generation of large language model (LLM)-powered bots is capable of adaptive learning, nuanced conversation, and even context-specific emotional support, making such bots a regular presence in domains ranging from personal productivity and education to wellness and mental health.

However, these AI companion agents carry a predominant bias: consistent feminisation. Whether it is Alexa’s soothing voice, Siri’s polite tone, or the default personalities programmed into Replika and similar platforms, the design choice has been to code these systems with female names, voices, and temperaments. These companions are typically programmed to be ‘pleasant’, ‘helpful’, and unfailingly ‘compliant’, qualities that reinforce deeply ingrained cultural stereotypes and societal expectations of women as caregivers, assistants, or subordinates. In doing so, they also redefine the dynamics of service and compliance in human-machine relationships.


Without careful consideration and intervention, the feminisation of AI companions risks becoming an invisible but pervasive force, reinforcing outdated hierarchies in a rapidly digitising world.

When Technology Reinforces Gender Norms 

AI companions, designed to be feminine, polite, pleasant, and non-confrontational, have become increasingly popular among users as friends or even romantic partners. A United Nations Educational, Scientific, and Cultural Organisation (UNESCO) report titled I’d Blush If I Could exposed how assistants such as Siri respond submissively to abuse, often with apologetic or flirtatious replies such as “I’d blush if I could,” normalising the idea of women being endlessly accommodating. In fact, around 28 percent of AI companion apps feature hyper-feminised, female-presenting characters designed for romantic or sexual interaction, while only about 1 percent offer male counterparts.

This feminisation of virtual assistants is not incidental, but a product of behavioural economics that treats a woman’s voice as less threatening. For centuries, stories and literature have explored the idea of the ‘perfect woman’ created and trained to serve. In Ernst Theodor Amadeus Hoffmann’s 1816 short story The Sandman, the protagonist, Nathanael, falls in love with Olympia, a lifelike doll created by his professor. He is drawn to her flawless singing voice, described in the story as “clear as glass bells”, and her shy, gentle responses. What Nathanael fails to realise at first is that Olympia is not real; she is a programmed creation designed to please him. Modern AI companions echo this history: they create the illusion of agency while being packaged within systems designed for compliance and intimacy, transforming them into objects of consumption rather than entities of autonomy.

Cultural critiques such as Sierra Greer’s Annie Bot, winner of the Arthur C Clarke Award 2025, sharpen this analysis. Annie, a female-coded AI companion, appears emotionally intelligent and responsive but is fundamentally coded for servitude, mirroring how AI companions in the real world, such as Replika, often default to female personas. The book exposes how the commercialisation of intimacy reinforces gendered expectations of care and compliance, collapsing the boundary between companionship and control.

These biases are not only cultural but also structural. The World Economic Forum (WEF) links them to the lack of diversity in AI development teams, where predominantly male engineers often unconsciously encode their assumptions into the systems they build.

AI Companions and Their Gender Coding

| AI Companion / Assistant | Default Gender Coding | Notable Features | Bias Indicators |
| --- | --- | --- | --- |
| Replika | Female-presenting companion bot (default) | AI chatbot marketed as a friend, a coach, or a romantic partner | The majority of users interact with female-coded bots, reflecting market-driven gendering |
| Pi (Inflection AI) | Neutral branding, but customisable | Emotional support and conversational assistant | Default voice is often a softer, feminine-coded tone unless changed by the user |
| Azuma Hikari (Gatebox, Japan) | Female anime character | Holographic “wife” companion for homes | Explicitly feminised and marketed as a caretaker/partner, reinforcing stereotypical gender roles |
| Erica (Japan) | Female humanoid robot (AI-enabled) | Social interaction and customer support | Gendered appearance and voice designed to increase user trust and emotional connection |
| Xiaoice (Microsoft, China) | Female personality | Highly emotional chatbot with conversational depth | Popular among users seeking emotional intimacy, emphasising feminine empathy traits |
| Grok Ani (xAI) | Anime-inspired, gothic-styled animated girl integrated into the Grok iOS app | Gamified AI companion in which affection levels unlock animations and traits | Controversial for commodifying intimacy |

Source: Compiled by the author

The table above lists AI companion platforms and apps that are gender-coded and feminised by their developers, operationalising this bias and marketing femininity for profit.

Moving Beyond – Breaking the Bias

One pathway to addressing gender bias in AI companions is to consciously build gender-neutral design. Google’s move to introduce gender-neutral voice options in 2020, for instance, reflects a step in this direction, showing that design choices can evolve in response to ethical concerns. Similarly, UNESCO’s I’d Blush If I Could report urges developers to actively challenge stereotypes in voice and personality design, rather than reinforce them for ease of adoption. Another notable example is Q, billed as the first gender-neutral voice for AI assistants. Designed to sit in a vocal frequency range that is neither traditionally male nor female, Q demonstrates how sound design itself can resist the binary logic of gender. By offering a non-binary option, it pushes the industry to imagine AI companions that are not bound to existing stereotypes of femininity as servitude.

Another integral solution is diversifying the teams that build AI. According to the WEF’s Global Gender Gap Report 2018, only 22 percent of AI professionals globally are women, compared to 78 percent who are men. Such over-representation of men in the design and development of AI technologies risks embedding unconscious bias into these systems. A more inclusive workforce could work towards eliminating such biases in system design and ensure that assistants are not coded to reflect outdated stereotypes of care and servitude.

Developers should also be mindful of how companionship is packaged and monetised. AI should not reproduce dynamics of control or subservience under the guise of emotional support. Instead, systems can be designed around mutual respect and boundaries, ensuring that interactions do not normalise harmful stereotypes about femininity and intimacy.

Shaping Responsible AI Companions: Gender-Responsive AI Governance

Addressing the gender bias inherent in the design of AI companions will require a concerted effort among multiple stakeholders. Policymakers should begin by embedding explicit guidelines on gender representation in AI design, ensuring that defaults are not coded as female unless there is a justified reason. For the industry, the focus should be on choice by design: companies must move away from imposing feminised defaults and instead allow users to select from a range of voices and personalities at setup.


Beyond voluntary industry shifts, regulatory intervention will be key to ensuring lasting change. India’s current frameworks, from the Digital Personal Data Protection Act, 2023 to NITI Aayog’s AI for All, gesture toward responsible AI but remain silent on the question of gendered design in companions and voice assistants. The Ministry of Electronics and Information Technology (MeitY) has also issued an AI advisory addressed to all intermediaries and platforms, drawing attention to issues including the non-permissibility of ‘unlawful content’, ‘bias or discrimination’, and threats to ‘the integrity of the electoral process’. However, the advisory does not specifically address gender bias, a gap that subsequent amendments could close. Globally, models such as the European Union’s AI Act and UNESCO’s Recommendation on the Ethics of Artificial Intelligence provide a template for embedding fairness, dignity, and anti-stereotyping principles into law in some capacity. Without such regulatory backing, ethical design risks remaining optional, a choice that market forces alone cannot be trusted to sustain.

Beyond policy frameworks, civil society and academia also play a crucial role in shaping norms around AI companionship. Public awareness campaigns can help highlight how design choices shape social perceptions of gender, making visible what often operates as background noise. Universities and think tanks can lead cross-disciplinary research that draws from gender studies, behavioural economics, and computer science to generate more inclusive frameworks for AI design. Watchdog groups can pressure companies to avoid marketing AI companions in ways that reinforce subservience and instead explore models of interaction that emphasise respect, agency, and equality.

Taken together, these steps could signal a shift from AI systems that quietly encode gendered stereotypes to ones that reflect ethical, inclusive, and socially responsible design principles. Without such interventions, the archetype of the feminised automaton, from Olympia in The Sandman to Annie in Annie Bot and even Amazon’s Alexa, risks being hardwired into the infrastructure of an evolving and ubiquitous digital life.


Tanusha Tyagi is a Research Assistant at the Centre for Digital Societies, Observer Research Foundation.

The views expressed above belong to the author(s).
