Author: Anirban Sarma

Published on Sep 27, 2024

Globally, most people access news and information online today, but the trend has a dark underside that we must act together to address.

Accessing news and information online: A Pandora’s box?

This article is part of the essay series “The Freedom to Know: International Day for Universal Access to Information 2024”.


In September 2024, for the first time, online platforms, led by social media, overtook television as the most popular source of news among adult consumers in the United Kingdom. Seventy-one percent of UK adults now routinely turn to the Internet and their smartphones for news.

The development came as no surprise. Globally, conventional news media have been losing market share to their virtual counterparts and rivals for years, marking a generational shift in the way people access and engage with news and information. In India, for instance, over 70 percent of people now rely on online news, with 49 percent using social media as their primary news source. As of 2023, 86 percent of US adults often or at least sometimes got their news from a smartphone, tablet, or computer. Similarly, across the European Union, over 75 percent of people aged 25–54 are more likely to read or watch news online than offline.

A dark underside 

The ease of accessing news on one’s personal device is hard to beat. There is also little doubt that Net-enabled smartphones have done more to advance instant and universal access to information than any other technological innovation in history. However, the production and mass online circulation of certain kinds of information are beginning to have an incendiary effect on digital societies and free speech debates.

For online media, the pressure to break news “first and fast” often compromises the accuracy of reportage and may be contributing to a gradual erosion of media ethics. Unverified, hastily published information can be fatal. During the early months of the pandemic in 2020, hundreds of lives were lost when many people—following false web reports—drank methanol and alcohol-based cleaning fluids, believing them to be a cure for the coronavirus. Four years later, the threat of misinformation has only grown. As an AI expert and University of Washington academic said ahead of the ongoing US presidential campaign, “I expect a tsunami of misinformation … the ingredients are there and I am completely terrified.”

However, it is the malaise of online disinformation—fake news and false or distorted information spread with the deliberate intent to deceive and sow discord—that can have especially damaging, long-term consequences. The World Economic Forum describes disinformation as a “growing crisis”, and a multi-country survey in 2021 found that 80 percent of its 125,000-plus respondents believed disinformation was a serious problem, while almost one-third said they had been victims of fake news. What makes the crisis especially severe is that disinformation is inextricably bound to the rise of online hate speech, radicalisation, polarisation on social media platforms, and the undermining of democratic processes.

In Myanmar, for example, the proliferation of anti-Rohingya fake news and hate speech on Facebook played a “determining role” in the events leading to the massacre of Rohingya Muslims in northern Rakhine state in August 2017. For months, Facebook had been rife with rabid posts that attacked the Rohingyas’ religion and origins, and with fabricated stories about Muslim brutality against Buddhists and about mosques in Yangon hoarding weapons as part of a plan to blow up Buddhist sacred sites. Even the Tatmadaw, Myanmar’s armed forces, took to Facebook to circulate anti-Rohingya propaganda and mobilise support for a campaign of violence targeting the community. Facebook has been roundly condemned for its part in the genocide, and in late 2021, groups of Rohingya refugees in the US and the UK sued the company for US$150 billion for permitting hate speech about them to spread.

Twitter and Facebook have long been battlegrounds for political polarisation, clustering consumers of news and information into echo chambers that reinforce their political ideologies. As Facebook’s data scientists found in 2015, only one-fourth of the content that Republicans post on Facebook is seen by Democrats, and vice versa. Twitter is no different: over three-fourths of Twitter users in the US who retweet and share political messages are from the same party as the message’s author.[1] Social media algorithms readily infer users’ political leanings, targeting their feeds with content that aligns with their pre-existing opinions, hardening their ideological stances in the process, and ultimately stoking antipathy towards “others” with different convictions.[2] The resulting minimisation of engagement with alternative perspectives has led to a shrinking of the digital public sphere, even as the number of active social media users grows every month.

Information and “news” consumed online have also had a demonstrated impact on the rise of radicalisation, leading to cases of out-and-out terrorism, of “grievance-based violence”, and of the innumerable “lone actor” attacks that populate the broad grey area between the two. The shooter in the 2019 Christchurch mosque killings in New Zealand, for instance, admitted he had been radicalised by the online videos and speeches of far-right personalities. His attack on Muslims attending Friday prayers at two local mosques was motivated by a mix of anti-immigrant and white-supremacist sentiment, acquired over time chiefly from news portals hosting extremist content and from like-minded peers on social media.

Finally, the advent of AI has opened multiple new avenues for creating and disseminating disinformation. Deepfakes are among the most recent and sophisticated of these, and they illustrate the massive and profoundly disturbing strides made in “intelligent” audio-visual content generation. In early 2024, robocalls reached a large number of American voters in the state of New Hampshire, who found themselves listening to an audio deepfake of President Biden exhorting them not to vote in the state’s primary election. Generative AI’s ability to produce fake but credible-sounding personalised emails has likewise made it an asset for cybercriminals executing phishing attacks and other social engineering schemes.

Addressing harms and risks 

Attempts to regulate harmful content online, whether news or other kinds of information, have proven tricky, as they often collide head-on with free speech laws or broader principles of freedom of expression. Complicating matters further, these issues are understood and dealt with quite differently in different countries.

The US, for example, is more evangelical about free speech than most nations: under the First Amendment, the government cannot restrict what citizens say online, except in narrow categories such as incitement to violence, true threats, and defamation. China’s repressive measures around news and information—especially when they do not conform to state positions—lie at the other end of the spectrum. The Indian approach—which identifies certain scenarios in which “reasonable restrictions” may be imposed on speech and expression—lies in between, though clearly nearer the American end. Operating within these kinds of legal frameworks, the following steps could mitigate the risks associated with accessing and consuming news and information online:

  • Broaden the scope of intermediary liability: Given the vast volume of news and information exchanged across online intermediaries, these platforms must be held more accountable for the content they host. Social media giants and other online service providers cannot be allowed to exploit “safe harbour” clauses to their advantage, claim that they are not publishers of content, and thus absolve themselves of all responsibility to build a safer, more inclusive cyberspace. The content moderators and other human and technological resources these platforms deploy must remove offensive content more proactively and consistently, without waiting to be notified by governments and users first.
  • Upscale fact-checking efforts and initiatives: A stronger culture of fact-checking and verification needs to be inculcated across the digital media ecosystem. Training and capacity-building efforts must begin by upgrading existing courses in media ethics and journalistic practice and extending them into professional and organisational spheres. Additionally, codes of conduct and standard operating procedures at digital media outlets should require content to be fact-checked internally or by third-party fact-checkers before being published online.
  • Strengthen the implementation of anti-disinformation mechanisms: Several well-crafted anti-disinformation plans and regulatory mechanisms have been devised around the world. Prominent among these are the EU Action Plan against Disinformation (2018), which seeks to strengthen member states’ capacities and joint responses, mobilise tech companies, and build societal resilience to disinformation; and the more recently introduced Code of Practice on Disinformation (2022), which aims to cut off profits from disinformation on digital platforms and to curb the spread of new tools such as bots and deepfakes. In India, the much-discussed Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules (2021) encourage online intermediaries to disable the sharing of information found to be false, fake, or misleading. The implementation of mechanisms like these should be continually monitored and strengthened, and opportunities for adopting them in other geographies explored.
  • Build an ecosystem of local-language digital data, information, and content: Finally, with digital platforms increasingly relying on AI to detect harmful content, AI itself must be better adapted to local languages, and tools that work natively in those languages need to be built. The paucity of training data currently available in these languages presents a serious challenge, and building an ecosystem of content and data in them will be a large, complex, and long-term undertaking. If successfully accomplished, however, it could benefit the cause of universal access to information in countless ways, one of which will be to support the development of AI services that act as a safeguard against disinformation.

Anirban Sarma is Deputy Director, ORF Kolkata, and Senior Fellow, Centre for New Economic Diplomacy.

[1] Chris Bail, Breaking the Social Media Prism: How to Make Our Platforms Less Polarizing (Princeton, NJ: Princeton University Press, 2021)

[2] Sinan Aral, The Hype Machine: How Social Media Disrupts Our Elections, Our Economy and Our Health – and How We Must Adapt (New York: Currency, 2020)

The views expressed above belong to the author(s).