Author: Anulekha Nandi

Published on Sep 27, 2024

Generative AI reduces the cost of generating and transmitting information, leading to information overload and manipulation and undermining the civic-participation potential outlined in SDG mandates for access to information.

The age of AI and the access-to-information paradox

This article is a part of the essay series: “The Freedom to Know: International Day for Universal Access to Information 2024”


Access to information has been recognised as a key element of sustainable development since the adoption of the Rio Declaration in 1992. It has since formed the centrepiece of international development initiatives, finding a place in the 2030 Agenda for Sustainable Development adopted in 2015, which promotes participatory governance and strong institutions under SDG target 16.10. The Human Rights Council, in its 2020 resolution on freedom of opinion and expression, also made it incumbent upon public institutions to make information publicly available. Access to information was likewise at the heart of earlier development efforts that used information and communication technologies such as radio to provide underserved communities with relevant information about economic opportunities, development projects, and best practices to improve their living conditions.

However, the ideals of access to information have been hemmed in by technological and social developments that have reduced the cost of producing and transmitting information, leading to information overload and manipulation. People are now faced with a deluge of information and synthetic content. Generative AI has exacerbated the risks of social media, which has on many occasions acted as a fulcrum for violence in many parts of the world. Generative AI's ability to produce information and media at scale, combined with social media's transmission capabilities, has raised a new spectre of cognitive overload and bias. One way the human mind deals with information overload is to engage with it selectively, picking out pieces that resonate with pre-existing belief systems.


Just as latent bias and prejudice creep into AI systems, their outputs feed the biases in people's internalised judgements, gathering momentum in the societal shifts summed up in concerns around increasing polarisation. The modality of access to information that was supposed to enable people to participate in public discourse and economic activity has often resulted in undesirable and violent manifestations of that potential. It has been implicated in swaying mass opinion and election results, and in producing false and humiliating depictions of people through deepfakes. The double-edged crisis of synthetic content and social media has produced newer and arguably more pervasive dimensions of risks and harms.

Cost of information production and transmission

Earlier, access to information was contingent upon the mode of transmission, i.e., radio or television, and many early development interventions treated such transmission devices as catalysts for development outcomes. With digital and internet penetration and the easy accessibility of generative AI platforms, the costs of both producing and disseminating information have come down. Moreover, the data now created is likely to persist over time due to the large shifts in data supply infrastructure erected by big tech companies building on earlier rounds of digitalisation. This heightens the chances of such data being used by algorithms to enhance their predictive capacity, just as live data is ingested by generative AI models. The lower cost of content generation, complemented by social media's capacity for transmission and virality, has heightened the risk of malicious campaigns, with such manipulation now recognised as the most significant short-term risk facing the world today.


Beyond content generation, AI-driven tools such as social media bots can augment social media's transmission capability by amplifying content and driving virality. This creates additional complications: current AI models were trained on content scraped from the web, whereas future rounds of AI development may be trained on AI-generated synthetic content, destabilising the foundation of information and truth and their ability to inform public debate. Using synthetic data may increase the chances of bias propagation and may lead to an increase in model error. The digital life of a data point is arguably infinite, which means this persistence translates into data being used in many different and uncertain ways, any number of times, across different contexts. This highlights the paradox between the SDG aims of access to information for public participation and socio-economic development, and the information overload caused by the combined effect of generative AI and social media, which often threatens to destabilise the social fabric across communities.

Way forward

Information manipulation, dissemination, and amplification occur through a combination of influencers, algorithms, and the crowd. The algorithm generates visibility for content through recommendation engines; influencers, in turn, curate content to leverage particular algorithmic capabilities and reach their audiences. The Coalition for Content Provenance and Authenticity (C2PA) is a joint development project based on an alliance between Adobe, Arm, Intel, Microsoft, and Truepic. It aims to develop open technical standards to certify the source, history, and provenance of media content. Moreover, regulations in jurisdictions like India and the European Union require deployers to label AI-generated content so that users can make informed decisions about its veracity. These developments highlight the potential for managing risks when responsibility lies with specific developers and deployers of given technologies. However, the rapid pace at which malicious synthetic content travels allows bias in algorithms to interface with bias in society. While multilateral organisations have put forward media and information literacy initiatives as an important strategy, these initiatives need to contend with the evolving sophistication of AI technologies and their ability to produce false and manipulated information at scale.


Anulekha Nandi is a Fellow at the Observer Research Foundation.

The views expressed above belong to the author(s).
