Towards Digital Inclusion 2.0

Published on Oct 18, 2021

Inclusion 2.0 can only be achieved when strands of access integrate an understanding of the history of oppression, feminist perspectives, and ethnic perspectives while continuously incorporating user feedback.

As the world continues to reel from the ongoing pandemic, digital platforms have been agents of hope. However, while several technological advancements have proven that technology can fit seamlessly into our lives, meaningful and inclusive access for all has lagged behind. On 11 June 2020, United Nations Secretary-General António Guterres presented a set of recommended actions for the international community, which sought to achieve universal digital inclusion by strengthening digital capacity and ensuring human rights in the online world. The timeframe set out extends to 2030, giving stakeholders nine years to ensure a healthy digital space that is inclusive regardless of ethnicity, nationality, gender, and disability.

Inclusion 2.0 is a work in progress. Therefore, now is the time to ask: What kind of internet do we need to unlock socio-economic opportunities for people in emerging markets? Even if we solve vital issues like access, affordability, and efficiency, what will the next billion internet users find when they get online? Will it interest them? Will it improve their lives? Will they be able to shape the internet to ensure that it does?

Barriers to inclusive technology

‘Inclusive technology’ bestows benefits upon its users by customising their experience to their specific needs in the digital space.

Connectivity is a prerequisite for accessing basic services, such as education, most of which depend on mobile technology. It is crucial to establish online schools that train young talent to navigate the digital world effectively while also acting as accelerators, providing digital services to aspiring entrepreneurs across socio-economic barriers. Another core challenge to inclusion is that an inclusive lens does not mean the same set of actions for all. It is particularly difficult where it is most needed: in developing countries grappling with multiple development goals on limited resources. Equally important is identifying the marginalised groups or stakeholders that most need targeted support.


Apprehension over encountering online hostility also keeps many voices at bay. Some measures, like flagging, have proven ineffective and difficult to scale, and they carry a risk of unfairness through the subjective judgments of human annotators. Hence, AI-enabled content moderation tools that can automatically detect online hate have been gaining popularity. However, periodic monitoring, complaint, and redressal mechanisms, accompanied by a human in the loop, play a pivotal part in keeping AI-based content moderation tools tailored to user preferences, lest they overregulate speech. There is also no universal classifier of hateful content, which means models are rarely tested on data from multiple social media platforms. This deficiency creates “barriers to entry” for researchers who lack the skills for model development but would be interested in interpretative Online Hate Research (OHR). Additionally, the way digital platforms deploy technology to shadow-ban content published by certain individuals needs a uniform approach across platforms and regions. Last year, TikTok faced flak from Black creators who alleged that their content on the Black Lives Matter movement was being taken down, muted, or hidden from followers. This incident followed TikTok’s earlier admission that it suppressed posts from physically disabled, overweight, and LGBTQ users because the platform identified them as “vulnerable to cyberbullying”. In light of such incidents, it is crucial that AI systems, which act as navigators for users, actively resolve bias against certain groups and communities in order to redefine inclusion.
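
To make the human-in-the-loop idea concrete, the sketch below shows one way such a moderation routing layer might look. It is a minimal illustration only: the `moderate` function, the `score_fn` classifier stub, and the threshold values are all assumptions for the example, not a description of any platform’s actual pipeline.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    action: str   # "allow", "review", or "remove"
    score: float  # the classifier's hate-speech probability

# Illustrative thresholds only; a real deployment would tune these per
# language, region, and community feedback to avoid over-regulating speech.
REMOVE_THRESHOLD = 0.95
REVIEW_THRESHOLD = 0.60

def moderate(post_text: str, score_fn: Callable[[str], float]) -> Decision:
    """Route a post based on an automated hate-speech score.

    score_fn stands in for any model mapping text to a probability in
    [0, 1]. Mid-confidence posts are escalated to a human reviewer
    instead of being removed automatically (the "human in the loop").
    """
    score = score_fn(post_text)
    if score >= REMOVE_THRESHOLD:
        return Decision("remove", score)
    if score >= REVIEW_THRESHOLD:
        return Decision("review", score)  # escalate to a human moderator
    return Decision("allow", score)

if __name__ == "__main__":
    # Trivial keyword stub in place of a trained classifier.
    stub = lambda text: 0.8 if "slur" in text.lower() else 0.1
    print(moderate("an ordinary post", stub))          # action='allow'
    print(moderate("a post containing a slur", stub))  # action='review'
```

The key design choice in such a scheme is that automation alone never issues borderline removals; only high-confidence cases are acted on automatically, while the middle band is reserved for human judgment and user redressal.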

Building a robust framework for inclusion

The benefits of innovation need to be shared more equally, with consideration of whose needs innovation currently meets, how excluded social groups could be better served, and how to promote broad participation in innovation, priority-setting, and governance. There are three critical elements around which a robust framework for inclusive digital tools should be moulded:

  • Listening and learning: Actively monitoring pain points for users along community, gender, and ethnic lines and incorporating suggestions into the tool. An easy-to-follow guide that informs the user of the requisite steps to register a complaint, with linguistic directions and categorisation, can help address hateful content online. Developers of digital platforms should incorporate regulatory sandboxes, which enable businesses to test innovative propositions with real consumers in a controlled environment. The main goal here would be to eliminate or minimise elements that alienate certain groups.
  • Creating allies: This serves the purpose of effective collaboration with diverse voices and building trust. The term ‘allyship’ denotes a lifelong process of building relationships based on trust, consistency, and accountability with marginalised individuals and/or groups of people. One way to build allyship at the basic level would be to ensure proportional representation of diverse ethnicities, communities, and genders in the design, development, and management stages of a digital tool. In 2018, Microsoft created the Xbox Adaptive Controller (XAC), which opened up the world of gaming to an often neglected, under-represented group: people with disabilities. Rory Steel, a father of two young children with hereditary spastic paraplegia, went on to create an arcade-like controller combining the XAC and the Nintendo Switch. In this example, Microsoft not only launched an inclusive product but also demonstrated allyship by reaching out to Steel to work with him on an even more refined version of his modified XAC controller.
  • Engaging with context: If users do not buy into or fully understand the transformations that a tool proposes, it will fail. To foster stronger digital enablement, it is important to understand the different journeys people are on and what will encourage them to change how they interact. Ethnographic methods can be used to gather insights for human-centred design approaches that empower minorities and provide them a valuable user experience across all touchpoints. For instance, acknowledging language barriers and adopting ways to address them is critical. Earlier this year, Apple added two new voices to Siri’s English offerings and eliminated the default “female voice” selection in the latest beta version of its operating system. Every person setting up Siri will now choose a voice for themselves; the assistant will no longer default to a female voice, a recurring point of criticism regarding bias in voice interfaces over the past few years. The inclusion of a ‘Black Siri voice’ came after a Stanford study showed that speech recognition systems have more trouble understanding Black voices than white voices.

Inclusion 2.0 can only be achieved when strands of access integrate an understanding of the history of oppression, feminist perspectives, and ethnic perspectives while continuously incorporating user feedback.


Swati Sudhakaran, Editor at AI Policy Labs, supported the author in writing the article.

The views expressed above belong to the author(s).

Contributor

Uday Nagaraju

Uday Nagaraju is the Founder & CEO of AI Policy Labs, a global public policy think tank focusing ...
