Author : Amoha Basrur

Expert Speak Digital Frontiers
Published on Sep 27, 2024

AI-driven information access enhances how people find and consume information, but it also poses ethical challenges that must be addressed to ensure universal access

Ethical considerations in AI-driven access to information

This article is a part of the essay series: “The Freedom to Know: International Day for Universal Access to Information 2024”


Artificial Intelligence (AI) is a culmination of the information age: the result of decades of advances in data processing and machine learning, and now a technology that increasingly governs the flow of information itself. AI has been hailed as a great equaliser, promising to revolutionise how people access, interpret, and share knowledge. Yet its applications range from translation tools and chatbots to content filtering and censorship, and questions about bias, transparency, and accountability often remain unanswered. The unprecedented avenues that AI has created for information dissemination and control have a flip side: these advancements come with significant ethical considerations that must be addressed to ensure that AI-driven information systems serve society equitably and responsibly.

The unprecedented avenues that AI has created for information dissemination and control have a flip side.

The transformative potential of AI

Historically, barriers to information have been shaped by factors such as geography, language, and technological literacy. As the technology has advanced, AI has been leveraged to overcome these hurdles and democratise access to information.

  • Access to healthcare information

Since the COVID-19 pandemic, AI tools aimed at widening access to healthcare have proliferated. To overcome communication challenges between healthcare professionals and deaf patients during the pandemic, a prototype system was developed to automatically translate diagnostic phrases through a computer-generated signing avatar. The World Health Organization (WHO) launched Florence, an AI-powered digital health worker designed to share public health messages on the complications posed by tobacco use during COVID-19. In 2024, this model was further developed and launched as S.A.R.A.H. (Smart AI Resource Assistant for Health), an AI-powered digital health promoter intended to improve access to reliable health information and promote health equity worldwide. It uses generative AI to give users 24/7 personalised, human-like responses that help them understand health risks and make informed decisions.

  • Access to education

AI has allowed online learning platforms to hyper-personalise education. Platforms like Khan Academy and Byju’s have launched a suite of AI models to personalise learning and improve educational outcomes. Coursera has greatly widened access to its catalogue by using AI to translate 4,000 of its courses into Hindi. In South Korea, EBS launched an AI-based conversational English programme called AI-Pengtalk to bridge the English-proficiency gap associated with parental socioeconomic status; the programme was found to significantly improve English skills and compensate for academic setbacks.

Platforms like Khan Academy and Byju’s have launched a suite of AI models to personalise learning and improve educational outcomes.

  • Access to government services

Language barriers and bureaucratic hurdles have long impeded citizens’ access to information on government services. One solution developed for this problem is Jugalbandi, a generative AI-driven chatbot from Microsoft and AI4Bharat that provides users with this information in 10 Indian languages. Its developers hope to expand the model to simplify interactions between institutions and individuals, such as retrieving English-language court documents in regional languages or filling out applications by voice.

  • Access to enhanced information discovery

The Indian Ministry of Culture launched the National Digital Library of India in 2019 to provide remote access to millions of e-books, e-journals, and other digital resources. The platform is equipped with AI-powered features that include content recommendations and intelligent search capabilities. Enhancing access to information discovery also requires linguistic diversity in technology development. The Indian government’s Mission Bhashini was launched in 2022 to build an Indian language tech ecosystem that enables multilingual access to the Internet and digital services.

The Indian government’s Mission Bhashini was launched in 2022 to build an Indian language tech ecosystem that enables multilingual access to the Internet and digital services.

Ethical challenges from development to deployment

Given the breadth of AI applications across critical and sensitive sectors, these systems must be developed with their potential harms in mind. Wide-scale deployment requires recognising the ethical challenges these systems pose and addressing them at each stage of development and use.

1. Algorithmic bias

One of the foremost ethical concerns in AI-driven information access is the potential for bias in AI algorithms. Underlying patterns in the data sets on which AI systems are trained may inadvertently be replicated or even amplified. This can affect the kind of information displayed to users of personalised services and create an access differential. One study found that online searches for African-American-sounding names were more likely to return ads suggestive of arrest records than searches for white-sounding names. The same differential treatment occurred in the micro-targeting of higher-interest credit cards and other financial products when algorithms inferred that the subjects were African American, irrespective of their financial background. Ethical AI development must focus on creating diverse and inclusive data sets that minimise these biases and ensure that AI systems provide balanced information to all users.
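
The access differential described above can be made measurable. The sketch below is a minimal, hypothetical illustration (the group labels, records, and rates are invented) of one common fairness check: comparing the rate at which a model shows a given outcome to different groups.

```python
# Hypothetical illustration of measuring an access differential in a model's
# outputs. All group labels and records below are invented for the sketch.
from collections import defaultdict

# Each record: (group, whether the model showed this user a given ad/result)
predictions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", True), ("group_b", False), ("group_b", False),
]

def selection_rates(records):
    """Share of positive outcomes per group (a demographic-parity check)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, shown in records:
        totals[group] += 1
        positives[group] += shown
    return {g: positives[g] / totals[g] for g in totals}

rates = selection_rates(predictions)
disparity = max(rates.values()) - min(rates.values())
print(rates)      # per-group selection rates
print(disparity)  # gap between groups; closer to 0 is more balanced
```

Audits of deployed systems use more sophisticated metrics, but the underlying idea is the same: a large gap between groups, as in this toy data, is a signal that the training data or model warrants scrutiny.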

2. Privacy Concerns

AI-driven information access relies heavily on data, much of which can be personal. Applications such as healthcare chatbots are constantly fed sensitive information by users. The potential for misuse or unauthorised access to the personal information these models are trained on and collect is a critical issue. Users may be unaware of the extent to which their data is being collected, or may not fully understand the implications of sharing this data with AI systems. Moreover, the increasing sophistication of AI allows algorithms to infer personal information from seemingly innocuous data points. Safeguarding user privacy requires stringent data protection policies, transparent data collection practices, and robust security measures for storing data sets.

3. Accountability

The question of accountability is central to the ethical deployment of AI. When AI systems cause harm, such as the spread of misinformation or inappropriate content filtering, there must be clear lines of accountability. However, unlike in traditional media, it is often unclear who should be held responsible for harmful outputs in AI ecosystems: responsibility tends to be deflected among the AI developers, the data providers, and the platform hosting the AI system. It is the responsibility of regulators to develop frameworks that clearly define who is accountable when AI systems fail to meet ethical standards.

4. Transparency and explainability

AI systems often function as “black boxes,” where the decision-making processes of the algorithm are opaque. This lack of transparency can lead to issues of trust, as users may not understand why certain information is being presented to them or how the AI system reaches its conclusions. Explainability is the ability to understand how an AI system arrives at a particular outcome. Ensuring that models are explainable allows for the development of trustworthy systems since it allows developers to assess vulnerabilities and verify outputs. Explainability is the first step to ensuring that AI, especially for public service applications, is ethical.
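For simple model families, explainability can be achieved directly. The sketch below assumes a purely additive (linear) scoring model with hypothetical feature names and weights; because the score is a sum of per-feature terms, each feature's contribution can be reported exactly, which is the kind of transparency the paragraph above describes.

```python
# Minimal explainability sketch for an additive scoring model.
# Feature names, weights, and the bias term are hypothetical.
WEIGHTS = {"age": 0.03, "symptom_count": 0.40, "smoker": 0.25}
BIAS = -0.5

def score(features):
    """Linear score: bias plus a weighted sum of feature values."""
    return BIAS + sum(WEIGHTS[k] * v for k, v in features.items())

def explain(features):
    """Per-feature contribution to the score, largest magnitude first."""
    contribs = {k: WEIGHTS[k] * v for k, v in features.items()}
    return sorted(contribs.items(), key=lambda kv: -abs(kv[1]))

user = {"age": 30, "symptom_count": 2, "smoker": 1}
print(score(user))    # total score for this user
print(explain(user))  # which features drove the score, and by how much
```

Modern deep models are not additive, so practitioners approximate this kind of per-feature attribution with post-hoc techniques, but the goal is unchanged: letting users and auditors see which inputs drove an output.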

It is the responsibility of regulators to develop frameworks that clearly define who is accountable when AI systems fail to meet ethical standards.

Conclusion

AI-driven information access presents immense opportunities for improving how people find and consume information. However, these opportunities come with ethical challenges that must be addressed to ensure that AI systems truly align with the spirit of universal access to information. Inclusivity, transparency, privacy, and accountability must be at the centre of every stage of development and deployment to ensure that we create an equitable and reliable information ecosystem for all. It is only by prioritising ethical AI that we can realise its full promise as a tool for universal information access.


Amoha Basrur is a Research Assistant with the Centre for Security, Strategy, and Technology at the Observer Research Foundation.

The views expressed above belong to the author(s).