Published on Aug 31, 2022
Avatars in Cyberspace: Rights and Obligations for a Virtual Self

Psychology and media analyses have often referred to avatars and personas in entertainment media. These analyses usually discuss how the persona, as reflected and fetishised in the media, differs from the actual personality of the individual presented. In this article, we extend this concept to cyberspace and address the primary forms of avatar creation and their corresponding potential for misconduct. Further, we discuss the need for regulations and ethical codes to govern avatars.

Avatar Creation and Use

Social media and cyberspace are platforms that immediately ‘other’ all users due to decreased transparency. This othering has allowed not only celebrities but any user with access to the platform to form personas. Currently, the internet, a user-focused platform, presents an issue around avatar protections and obligations. Thus, users require regulatory support that integrates governments, industries, and other users.

Based on user-avatar interaction, avatars can be categorised into two main forms: user-programmed avatars, and avatars programmed and assisted by Artificial Intelligence (AI) or Machine Learning (ML). These avatars can manifest personas in many ways. One example is changing the user’s appearance with editing tools or ‘catfishing’. Another is the user-programmed avatar built for marketing, strategy, and product advertising; these are usually categorised under character merchandising and use ML or AI to create an interactive experience that bridges the gap between users and industry.

In the realm of avatar usage, the most common issues concern identity and representation. Avatars are a manifestation of the self beyond the realm of the physical. Since user data is a requirement for avatar creation and data privacy is an essential pillar of every facet of the internet, confirming user identity for avatar creation becomes unethical. Therefore, avatar identity is self-defined and not pre-ordained.

While cyberspace idealism, the creative capacity to manipulate an avatar to represent the user’s desired existence, is liberating in many ways, this idealism brings forth discrepancies between individual identity, perceived identity, and community representation. Therefore, even identity-based actions of the avatar need platform-specific norms to avoid incongruities concerning user data privacy. Users can also create multiple avatars and identities for themselves, from video game characters to message board monikers, which adds to the complexity of ‘avatar accountability’ in cyberspace.

By not limiting the number of avatars per user, cyberspace allows users to create online representations of others without their consent or verification. Though this most often takes the form of celebrity impersonation and is countered by laws on impersonation, fraud, and defamation, it is in many cases not limited to the famous and can exploit the vulnerable. Users are currently not prohibited from creating profiles based on discriminatory practices and false or fictitious identities, which allows them to upload sexually explicit images using deepfakes and stolen imagery, ruining reputations. While user privacy requires protection, users cannot be allowed to exploit the anonymity the internet provides to harm others.

Digital appropriation further compounds avatar identity issues, with more extreme cases amounting to digital blackface.<1> When creating a false identity becomes a matter of using cultural identifiers for character or personality merchandising, the lines become blurred. While companies often deploy such identifiers in this way, individual users are not “characters” and thus end up appropriating avatar representation.

Regulatory guidelines that assign the avatar an identity separate from its user or programmer can govern both avatars used as an extension of the self and avatars that are non-human or programmed.

Creating Regulatory Accountability

Holding users directly responsible will require regulatory authorities to develop rules in advance, in tandem with the ethics of confirming real-life communities and identities. Such regulation will need to delineate issues like avatar creation based on idealism and identity theft, even when such misconduct is unintentional. Alternatively, a global authority, ideally one that aligns with no single national cyber law or ideology, will need to form regulations built around the user, limiting misconduct without incriminating unintentional offenders.

Regulations in this area will invoke two main principles:

  • To establish an avatar’s identity in cyberspace as that of a separate individual, and to limit the number of avatars each user can create; and
  • To treat avatars as separate entities, analogous to the legal existence of companies and proprietorships.

Both approaches hinge on creating a distinct legal identity for avatars.

These methods will address the issues previously alluded to without disrupting user experience or data privacy regulations.

Where an avatar is user-based, any legal process around the avatar’s actions need not compromise user privacy, since the avatar carries an identity distinct from its creator. Limiting the number of avatars an individual can create will also deter identity theft, the circulation of lewd imagery, reputational attacks, and other forms of harassment.
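
To make this separation concrete, the sketch below (in Python, and purely illustrative: the Avatar and AvatarRegistry classes, the create_avatar and resolve_owner functions, and the cap of three avatars are assumptions made here for illustration, not any platform’s actual design) shows how a platform could issue each avatar its own identifier, keep the avatar-to-user mapping internal, and enforce a per-user creation limit.

    import uuid
    from dataclasses import dataclass, field

    # Hypothetical cap; an actual limit would be set by regulation or the platform.
    MAX_AVATARS_PER_USER = 3


    @dataclass
    class Avatar:
        # The avatar carries only its own identifier and display name;
        # it holds no reference to the user's real-world identity.
        avatar_id: str
        display_name: str


    @dataclass
    class AvatarRegistry:
        # Internal mapping from a (non-public) user account ID to that user's avatars.
        avatars_by_user: dict = field(default_factory=dict)

        def create_avatar(self, user_account_id: str, display_name: str) -> Avatar:
            # Enforce the per-user cap before issuing a new, independent avatar identity.
            owned = self.avatars_by_user.setdefault(user_account_id, [])
            if len(owned) >= MAX_AVATARS_PER_USER:
                raise ValueError("per-user avatar limit reached")
            avatar = Avatar(avatar_id=str(uuid.uuid4()), display_name=display_name)
            owned.append(avatar)
            return avatar

        def resolve_owner(self, avatar_id: str):
            # Only an authorised process (e.g. a lawful request) would call this;
            # routine interactions see only the avatar_id, never the owner.
            for user_account_id, avatars in self.avatars_by_user.items():
                if any(a.avatar_id == avatar_id for a in avatars):
                    return user_account_id
            return None

In such a design, routine interactions expose only the avatar’s public identifier, while the mapping back to the user account is reachable solely through an authorised process such as a lawful request, mirroring the separation between avatar and user identity described above.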

In the case of accidental avatar-based misconduct, such as unintentional appropriation, unintentional harassment in game spaces, or unintentional identity imitation, the user who created the avatar is still protected, since autonomy rests with the avatar, which is treated as an entity separate from its programmer. Protecting the unempowered and unintentional offender while holding the incriminated actor responsible creates greater accountability for both the platform and the user.

Such regulatory support will also assist in cases where avatars are based on ML programming and adapt through interactions with other avatars. The programmer, who may have created the avatar under the ethical guidelines current at the time, will not be held accountable for the avatar’s later actions. Misconduct will result in the avatar being reprogrammed rather than the programmer being banned for unintentional malpractice.

Aside from these regulations on avatar autonomy, regulators will need to create globally relevant ethical guidelines around avatar usage and identity. To counter issues like identity theft, the rules recommended above may be helpful; however, ethical guidelines are more appropriate for soft misconduct, such as digital blackface, indirect harassment, or defamation and reputational attacks. Such ethical guidelines must include codes of conduct that are platform-specific and nurture user experience.

Platform-specific codes of conduct, aside from general ethical guidelines for cyberspace, are necessary due to the vast cultural differences not just between users but also between platforms. These should cover standards that impose reasonable obligations to refrain from rape, theft, murder, assault, slander, and fraud, and secure the right to life, the right to privacy, and the right to equal opportunities and accessibility to services. Ethical guidelines do not act as law; thus, platform-specific codes of conduct will augment them.

Conclusion

Creating regulations, ethical guidelines, and platform-specific codes of conduct in tandem with one another will pre-empt the possibility of misconduct amplified by neurological linkages and the Proteus effect.<2> Thus, it is essential to draw from existing cyber laws to create regulatory backing for the decentralised metaverse, and to create ethical guidelines and platform-specific codes of conduct for the jurisdictional predecessor of the metaverse.


<1> The use by white people of digital depictions of Black or brown people or skin tones, especially for the purpose of self-representation or self-expression.

<2> The Proteus effect is the tendency for people to be affected by their digital representations, such as avatars, dating site profiles, and social networking personas.

The views expressed above belong to the author(s).

Author

Shravishtha Ajaykumar

Shravishtha Ajaykumar is Associate Fellow at the Centre for Security, Strategy and Technology. Her fields of research include geospatial technology, data privacy, cybersecurity, and strategic ...
