Debunking Disinformation: Can India learn from the West?

Published on Sep 04, 2020

The menace of disinformation is not new. From the falsehoods that fuelled the “Corona Jihad” controversy in early March to fake stories alleging China’s deliberate role in enabling the spread of Covid-19, these fabricated narratives manipulate sentiment, feed negative biases that harden into convictions, and embolden acts of online hate speech.

Much like the pandemic itself, instances of this 'infodemic' have increased exponentially this year. The two have become almost interdependent: the pandemic incubates echo chambers in which online falsehoods flourish, while the infodemic creates new avenues for pandemic-related hoaxes, mistrust and a general skepticism that seems to ensure that, in this fight, the pandemic always has the upper hand. Notable Indian fact-checkers have recorded a surge in instances of misinformation and disinformation since the end of January 2020, roughly when the virus made its inroads into India. With Indians found to be most susceptible to fake news (per Microsoft’s Digital Civility Index of 2019), mis/disinformation has taken various shades: communal, xenophobic, distorting medical information, even infecting the voices of the State. The consequences have been disastrous, ranging from increased sectarian friction to schisms in public trust and credibility.

Over-regulation by the State 

India has no specific provision to regulate fake news, only a barrage of ambiguous legislation covering a whole gamut of related phenomena: defamation, sedition, promoting enmity between different groups and inciting public mischief under the Indian Penal Code, alongside relevant provisions of the Information Technology (IT) Act, 2000. The Indian policy dispensation is thus ill-equipped to deal with this dynamic ecosystem, where new methods of disseminating and amplifying falsehoods emerge daily. Owing to deficits in technical and cognitive capacity, the Indian State is also infamous for imposing arbitrary blanket internet shutdowns and blocking websites in an effort to curb mis/disinformation; while these create a semblance of short-term relief, they cause irreparable social, economic and humanitarian harm in the long run.

The revised and still-awaited iteration of the Ministry of Electronics and Information Technology's (MeitY) Intermediary Guidelines Rules, which would dilute the safe harbour protection accorded to social media companies by mandating proactive monitoring of "unlawful content" and traceability on private, end-to-end encrypted platforms, has been widely criticised as problematic. Various global digital rights groups and industry coalitions have objected to it on grounds of “over censoring, harming the fundamental right to privacy of Indian users and having grave unintended consequences on the security of the Internet", thereby potentially putting India's digital future at risk. Moreover, it is widely accepted that over-regulation by the State can endanger personal freedoms by having a “chilling effect” on expression and free speech. It is worth noting that Section 66A of the IT Act, the only provision that sought to punish those who peddle disinformation, was deemed unconstitutional on precisely these grounds and struck down by the Supreme Court in the landmark Shreya Singhal judgement of 2015.

On the other side: Are platforms doing enough? 

Amidst government pressure to take down content, social media intermediaries have been continuously grappling with the tightrope walk between facilitating expression and moderating it. Confronted with the problem of scale, they have been largely unable to proactively detect and deal with the deluge of disinformation. Intermediaries also tend to work in silos, each with its own set of content guidelines, which impedes information sharing and collaboration and slows the industry's collective response to viral disinformation.

Consider, additionally, the allegations of ideological bias levelled at platforms like Facebook and Twitter, and the State scrutiny that has followed. While this may yield some short-term gains in transparency and accountability, we must not ignore the quiet pivot it has set in motion for platforms: away from normative content moderation practices and towards a more cautious approach driven by the appeasement of elected officials and political parties rather than by constitutional, equitable principles.

A possible path 

Given these gaps in State capacity and intermediary responsibility, and the lack of trust and coordination among constituent groups, the issue clearly calls for a hybrid, agile and collaborative approach that is multi-stakeholder in nature. A healthy mix of involvement, ideas and implementation is imperative. In this regard, India could benefit from an industry-led self-regulatory framework, coordinated and administered by the State. One model worth adopting is the European Commission's approach to fighting online disinformation.

Acknowledging that disinformation efforts erode trust in institutions, undermine electoral systems and deepen societal tensions, the European Union (EU) has implemented a series of collaborative action plans, complemented by smart regulation, that put intermediaries at the forefront of this fight. In 2018, in a global first, the EU initiated the Code of Practice on Disinformation, a voluntary, self-regulatory set of standards for tech companies and advertisers to check the spread of fake news. Addressing issues such as inauthentic users, fake accounts and bots, political advertising, and the disruption of advertising revenues of websites and pages known to spread disinformation, the Code directs social media signatories to employ technical and policy strategies to curb falsehoods. The Code also seeks to empower users and the research community with tools and support to better understand and detect the drivers of disinformation. The Commission has set up a multi-stakeholder Sounding Board, comprising representatives from civil society, media, consumer organisations and academia, to voice concerns.

Signatories such as Google, Facebook, Twitter, Mozilla and, most recently, TikTok (June 2020) are required to submit baseline reports elucidating the measures they have taken to comply with the Code. Signatories must also present annual self-assessment reports that gauge the Code's effectiveness. This, as the Commission notes, has provided an opportunity for "greater transparency into the platforms’ policies on disinformation as well as a framework for structured dialogue to monitor, improve and effectively implement the same".

Complementing these regulatory measures, the Commission has also established a Digital Media Observatory that seeks to build resilience against disinformation by creating a "European Hub" for fact-checkers and media organisations. The Observatory aims to strengthen research capabilities, ensure privacy-protected access to platforms' data for academics, build a public knowledge portal, and support public authorities in monitoring social media platforms’ efforts to fight fake news. Adapting to the deluge of falsehoods that followed the pandemic, the Commission also released a joint statement seeking to “resolutely counter disinformation” with timely, transparent communication, instituting a rapid alert system specifically for disinformation and subjecting platforms to stricter scrutiny through monthly reports. The results are telling: Google’s YouTube reviewed over 100,000 pieces of misleading content and removed over 15,000 of them; Facebook’s suite of applications directed more than 2 billion users to credible sources of information like the World Health Organisation, countering falsehoods with facts, and took down over 7 million pieces of Covid-19 related misinformation worldwide; and Twitter reinforced information integrity through stronger machine-learning tooling, challenging more than 3.4 million suspicious accounts targeting Covid-19 narratives.

To start with, India could certainly benefit from adopting such a multi-stakeholder framework, with the IT Ministry leading by playing a constructive role in coordinating, facilitating and monitoring the process. As signatories, social media intermediaries, industry forums and advertisers would agree to comply with a voluntary code that promotes self-regulation, but with transparency. A similar model was implemented just before the Lok Sabha elections of 2019, when the Internet and Mobile Association of India (IAMAI), an industry association representing Google, Facebook, Twitter, ByteDance/TikTok and ShareChat, among others, drafted the Voluntary Code of Ethics in consultation with the Election Commission. Largely successful in facilitating a collaborative mechanism between the State and the tech industry to safeguard the integrity of the electoral process, the model is being observed for future Central and State elections as well.

However, the ever-dynamic fight against mis/disinformation will not be won with a one-time, one-size-fits-all solution. Policy will need constant iteration and revision, which becomes practically possible once a mechanism such as this is put in place as a stepping stone or base framework. This would do the much-needed job of filling the present regulatory lacunae and forging a path towards collaborative consensus.

The views expressed above belong to the author(s).

Author

Siddhant Chatterjee

Siddhant Chatterjee is a Policy Consultant. He has previously worked with the British and Australian Governments, and his interests lie in AI Ethics and Platform Governance.
