Published on Apr 16, 2025
Condemned to Be Free: Balancing Free Speech and Security Online


This article is part of the series — Raisina Files 2025


In 2019, the United Kingdom’s (UK) Digital, Culture, Media and Sport Committee released the final report[1] of its enquiry into disinformation and ‘fake news’, training its focus on the Facebook and Cambridge Analytica scandal. The report foregrounded growing concerns about the opaque business models that shape the algorithmic curation of targeted political advertising and disinformation campaigns, often run by foreign countries. The 18-month enquiry highlighted how the dominant market position enjoyed by these companies enables them to pursue business models that subvert local laws and user protections. At the same time, the combination of economics and algorithms that enables surreptitious targeting also creates echo chambers and filter bubbles in online spaces that prime vulnerable individuals for radicalisation, or allows the escalation of dangerous speech[a] into offline violence that threatens human life and national security.[2]

To be sure, before social media platforms showed their colours as enablers of harm, much of the world heralded rapid digitalisation, the evolution of information and communications technologies, and the emergence of social media as “epochal” transformations. They were seen as a manifestation of the ‘public sphere’ ideal: spaces where debate, dialogue, deliberation, and democracy would flourish. Even today, social media remains an effective channel for democratised access to civic and political participation.

Therein lies the perpetual dilemma in the governance of social media platforms.

In contrast to politically centralised countries like China,[3] democratic states tend to guarantee freedom of speech and expression as a fundamental human right for their citizens, albeit with reasonable restrictions. Such restrictions apply even in the United States (US),[4] a country with perhaps the strongest protections for free speech, as well as under international law through the Universal Declaration of Human Rights and the International Covenant on Civil and Political Rights.[5] “Reasonable restrictions” typically encompass unlawful and harmful activities, incitement to violence, threats to sovereignty and national security, and disruptions of public order.[6]

In 2016, social media platforms were instrumentalised to enable foreign interference in the US election.[b],[7] In the years prior, however, these platforms were already being implicated in harmful activities, including information manipulation in war zones, manipulation of public sentiment by organised groups of actors, dissemination of child sexual abuse content, harassment of marginalised communities, radicalisation, dissemination of terrorist content, and incitement of offline violence.[8] Consequently, over the years, this spectrum of safety and security risks—to individuals and states—has led to reasonable restrictions becoming the norm.

The task of minimising the safety and security risks of social media is not easy. After all, these companies’ architecture of technology and business converges on the data and attention economy.[9] Social media companies exercise significant control over both the millions of consumers whose attention they sell to advertisers and the publishing side of the market. This control is reinforced by sophisticated advertising infrastructures designed to target consumers with precision; in their position as intermediaries, these companies are armed with a staggering wealth of detailed information about both sides of the market.[10] Their opaque marketing techniques are underpinned by data infrastructures and by insights generated from granular behavioural data drawn from connected media such as apps and plugins.[11]

Indeed, the issues with regulating Big Tech are the same as those that emerged in corporate governance with the spread of globalisation and transnational firms.[12] Given their market dominance and globally sprawling operations, ensuring compliance with rights standards and regulations is fraught with challenges. Rights protections and their enforceability tend to fall under the purview of not one but several jurisdictions, and it is difficult to regulate firm behaviour across those jurisdictions when companies remain embedded in the political and legal circumstances of their country of origin.[13]

Reasonable Restrictions as the Norm

Online platforms have become so deeply intertwined with citizens’ civic and political participation that our collective lives are, in effect, being curated and defined by algorithms. If, at its nascent stage, social media was trumpeted as the ideal public square, it later revealed its dark underside: first as a potent tool of bullying and harassment, amplification of misinformation, disruption of public order, and foreign interference; and then as an instrument of harmful behaviour that stokes internal instability, disharmony, and grey-zone warfare. Thus, over time, reasonable limitations on free expression became the norm for governing and regulating this space. Safety and security assumed primacy, at least in theory, to enable civic and political participation; yet these measures have sometimes inadvertently, and at other times wilfully, squelched such modes of democratic engagement.

Security threats that impinge on freedom of expression can broadly be categorised into three dimensions: threats to personal safety, threats to internal security, and the threat of foreign interference. A 2020 study found that in the UK, 62 percent of adults and an alarming 81 percent of 12–15-year-olds had at least one harmful experience on social media in the preceding 12 months.[14] Personal safety can be compromised through bullying, sexual grooming, harassment, and identity-driven attacks based on gender or religion. The harms also encompass incitement to suicide, hate speech and algorithmic discrimination, sexual extortion, invasion of privacy, frauds and scams, misinformation and disinformation, phishing and catfishing, cyberstalking, and smear campaigns.[15] Online misinformation and extremist campaigns can shape mass behaviour and translate into offline violence.[16]

However harmful content is created, its dissemination and consumption are notoriously difficult to trace, intertwined as they are with opaque algorithmic architectures and business practices predicated on an attention economy. Algorithms trained on extensive and granular personal and behavioural data often bypass users’ rational reflection.[17] Recommending content based on consumption patterns frequently resembles predatory advertising. Such targeted recommendations drive user retention and network effects, ensuring that the economic value users represent can be exploited. These characteristics have been weaponised to disrupt public order, with dangerous speech escalating into offline violence, and to enable foreign manipulation and interference through large-scale misinformation campaigns.

At the heart of dangerous speech escalation and foreign interference lies the algorithm’s ability to influence user behaviour: exploiting, entrenching, and amplifying cognitive biases enables mass behaviour-change operations.[18] Compounding the challenge is not just the question of platform accountability but also the difficulty of tracing where content originates. Content policy and regulatory debates have often deliberated on the limits of safe harbour under intermediary liability. However, transferring blanket censorship powers to platforms would lead them to err on the side of caution so as not to run afoul of the authorities.[19]

This highlights the difficulty of establishing effective regulation: legal, political, and economic relationships are implicated in the balance of power between companies, users, and governments.[20] Mandating automated censorship would mean transferring censorship rights to private entities, while state regulation would be cumbersome and potentially overreaching, creating a China-like situation of state control.[21] The imperative is to develop a multipronged strategy to navigate the competing concerns that arise in this domain.

Content Policies: A Double-Edged Sword

Broadly, national regulatory approaches to online speech may be thought to lie along a sliding scale: the liberal US approach with ‘reasonable restrictions’ is at one extreme, China’s ‘censor-and-expunge’ approach is at the other, and most other countries lie somewhere in between. Beyond these frameworks, the closest things to online law are the content policies and community guidelines of the social media platforms themselves, which set down rules of conduct that their moderators seek to enforce.

Platforms do try to promote respectful and civil interaction by establishing rules against harmful speech and toxic behaviour, and by reserving the right to take down offending content. In effect, however, content guidelines and moderators often end up stifling speech on platforms and exercising sweeping power over personal expression. Moreover, the unilateral decisions sometimes taken by Big Tech firms to deplatform users or suppress content—decisions that could be at odds with national laws supporting free speech to begin with—can be deeply problematic.

In 2016, for instance, Facebook was roundly criticised for censoring a post bearing the Pulitzer Prize-winning ‘napalm girl’ photograph from the Vietnam War, which shows nine-year-old Kim Phuc crying and running naked down a road during a napalm attack. Following widespread fury, the company reinstated the image, explaining that a picture of a naked child would normally violate its community standards, but that in this case, it understood that the “value of permitting sharing [the photo] outweighs the value of protecting the community by removal.”[22] Twitter found itself in the eye of a storm in 2022 after internal documents revealed that the company had suppressed media articles about the Ukrainian business activities of Hunter Biden, son of then presidential candidate Joe Biden, ahead of the 2020 US elections. The exposé also revealed a variety of content restrictions and blacklisting techniques Twitter was using to censor posts on its platform.[23]

Algorithms are routinely pressed into the service of content policies, compounding their double-edged effect. Facebook, Twitter, and YouTube all use algorithms to moderate content, and these systems determine whether a piece of expression is permitted, amplified, or muzzled. On average, 3.7 million new videos are uploaded to YouTube every day,[24] but the algorithms used to assess them are opaque. YouTube claims that its AI applications allow it to identify and remove 80 percent of offending videos before they are routed to a human moderator.[25] However, content creators are often left bewildered about the reasons for removal, and are frustrated by the lack of transparency surrounding YouTube’s process for appealing against takedowns. Moreover, takedowns result in ‘strikes’ against a creator’s channel, leading to temporary upload restrictions; enough strikes could eventually get a channel permanently disabled.[26]

The issue of content moderation has proven surprisingly difficult to manage. Without a doubt, the political and cultural biases of social media platforms and their pursuit of advertising revenue shape their treatment of different kinds of content. However, the fallibility of underpaid and overworked human content moderators[27] and of far-from-foolproof AI tools also plays a role. As online hate speech against Myanmar’s Rohingya Muslims escalated in the two years leading up to the 2017 genocide against the community, Facebook was found to have employed only two Burmese-speaking moderators during that period.[28] Platforms’ growing reliance on AI to detect harmful content is similarly flawed, as these systems are often poorly adapted to local languages.

Towards Safe and Secure Online Public Spaces

Given that most forms of harmful expression, including hate speech, proliferate primarily on social media, the responsibility for curbing them ought to be assumed in large part by the platforms themselves. Across geographies, however, it is widely agreed that platforms are doing too little in this regard.

There is enough evidence to suggest, for example, that hate speech has increased globally on Twitter since its acquisition by Elon Musk.[29],[30] Facebook has a long history of applying its content policies inconsistently, or in markedly self-serving ways. In Australia, Big Tech platforms have tended to be callously negligent about the volumes of child sexual abuse material (CSAM) in circulation, and about the thousands of links to CSAM sites being distributed via social media’s direct messaging services.[31]

When allegations of harmful speech are raised, platforms often defend themselves by arguing that they are mere intermediaries between content creators and consumers, and not publishers themselves, and therefore cannot be held accountable for posts. This may be tenable up to a point. Tech laws in several countries, such as India’s Information Technology (IT) Act, include a ‘safe harbour’ clause that says “an intermediary shall not be liable for any third-party information, data, or communication link made available or hosted by it.”[32] There is a caveat though: the safe harbour will not be granted if an intermediary “fails to expeditiously remove” a piece of content even after the government flags that it is being used for unlawful purposes.[33]

A critical mechanism for enforcing accountability from social media platforms is provided by the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules of 2021, framed under the aegis of the IT Act. The Rules identify eleven types of content that intermediaries cannot publish or transmit. These include information that is “obscene, pornographic, paedophilic, invasive of another’s privacy”, threatens national security, or is “patently false and untrue or misleading in nature.”[34] Not only do the Rules make it mandatory for intermediaries to take down offensive content of this kind when alerted, but they also insist that intermediaries use technology to pre-screen such content and remove it pre-emptively. The message is clear—intermediaries can no longer profess ignorance or indifference about what they host. As several cases in Indian courts have since demonstrated, the judiciary has thrown its weight behind the Rules, and Big Tech is feeling some heat.[35]

Other countries, too, are finding ways to push back against the transgressions of social media and are advancing online safety. Australia, for instance, is systematically enforcing more ethical and responsible behaviour from online service providers (OSPs).[36] A focus of the country’s Online Safety Act, passed in June 2021, is to make Big Tech and other OSPs more accountable for safety by laying down a set of ‘Basic Online Safety Expectations’ that compel service providers to tackle CSAM and other online harms more actively. The Act makes it compulsory for tech players to develop codes to detect and remove CSAM, failing which eSafety—Australia’s regulatory body for online safety—can impose industry-wide standards for the purpose.[37] With eSafety prioritising the investigation of complaints about CSAM, OSPs with Australian end users are under pressure to align their safety procedures with the Act’s requirements.

The Indian and Australian initiatives are useful models, and similar approaches are being implemented in other parts of the world. Yet balancing free speech and expression with security and safety also means weighing competing interests, and that is where the difficulty lies. France’s experience with the Avia Law illustrates the dilemma. In 2019, a French parliamentarian introduced a bill—popularly called the Avia Law—to regulate hate speech on social media. It called for platforms to remove hate speech within twenty-four hours of a notice or complaint being received.[38] Opposed furiously by free speech activists, the bill was watered down, and the French Constitutional Court eventually struck down its core provision on content removal within twenty-four hours, calling it “a breach of the right to freedom of expression and opinion.”[39] The version of the Avia Law that entered into force in 2020 was a mere shadow of the law it could have been.

The Road Ahead

In 2023, UN Secretary-General António Guterres warned, “The spread of hatred and lies online is causing grave harm to our world. Misinformation, disinformation and hate speech are fuelling prejudice and violence; exacerbating divisions and conflicts; demonizing minorities; and compromising the integrity of elections.”[40] The urgency of finding an optimal balance between free speech on the one hand and safety and security on the other is therefore only growing. As countries explore a variety of possible solutions—from stringent laws to heightened self-regulation by tech companies—the following measures could prove useful.

  • Broadening the scope of intermediary liability: The scope of “intermediary liability” needs to be broadened, and social media platforms and other intermediaries ought to be held squarely accountable for the content they host. The human and technological resources deployed by these platforms should work towards removing harmful content more proactively.
  • Building a stronger culture of fact-checking and information verification: A robust culture of fact-checking, verification, and validation must be instilled across national digital media ecosystems. Instituting related capacity-building programmes, codes of conduct, and standard operating procedures at digital media outlets could be valuable.
  • Identifying governance levers: Governance mechanisms span voluntary non-binding initiatives, community guidelines, and laws and regulations such as those on intermediary liabilities. There is a need to identify and align appropriate governance levers with relevant stakeholders and institutional mechanisms to develop effective policy and regulatory pathways.
  • Promoting pathways for citizen participation: The development of platform governance strategies needs to promote and include active citizen participation to ensure that such measures are working in favour of the public. In addition to the participatory process, this requires legitimation processes to ensure that these initiatives are codified within operational and regulatory practices.
  • Developing definitional consensus and standards on online risks: A critical bottleneck impeding effective governance is the lack of definitional standards and consensus on risks online. This highlights the need to develop common frameworks of reference for effective regulation of transnational social media companies.
  • Strengthening AI-driven detection systems for harmful content: In the longer term, every effort must be made to advance AI-based applications for detecting harmful content in local languages. This is a large-scale, multi-stakeholder undertaking, and will involve building a sprawling corpus of local-language content and training data. If done right, though, it could pay rich dividends in the future.

Endnotes

[a] The term ‘dangerous speech’, as distinct from hate speech, was coined by American journalist Susan Benesch to describe forms of expression that increase the risk of people condoning or participating in offline violence. See: https://www.dangerousspeech.org/dangerous-speech

[b] According to a declassified assessment released by the United States’ Office of the Director of National Intelligence, Russia ran a multi-faceted influence campaign on social media, combining covert intelligence operations with overt efforts by Russia-backed state and non-state actors (See: https://www.dni.gov/files/documents/ICA_2017_01.pdf). This also included attempts to infiltrate voting infrastructure and to influence public opinion and promote discord through social media (See: https://daviscenter.fas.harvard.edu/insights/why-do-we-talk-so-much-about-foreign-interference).

[1] UK House of Commons, Digital, Culture, Media and Sport Committee, Disinformation and ‘Fake News’: Final Report, 2019, https://committees.parliament.uk/committee/378/digital-culture-media-and-sport-committee/news/103668/fake-news-report-published-17-19/

[2] Swathi Meenakshi Sadagopan, “Feedback Loops and Echo Chambers: How Algorithms Amplify Viewpoints,” The Conversation, February 4, 2019, https://theconversation.com/feedback-loops-and-echo-chambers-how-algorithms-amplify-viewpoints-107935; Jonathan Stray, Ravi Iyer, and Helena Puig Larrauri, The Algorithmic Management of Polarization and Violence on Social Media, Knight First Amendment Institute at Columbia University, 2023, https://knightcolumbia.org/content/the-algorithmic-management-of-polarization-and-violence-on-social-media; Allison J.B. Chaney, Brandon M. Stewart, and Barbara E. Engelhardt, “How Algorithmic Confounding in Recommendation Systems Increases Homogeneity and Decreases Utility” (paper presented at RecSys ’18: Proceedings of the 12th ACM Conference on Recommender Systems, 2018).

[3] David Bandurski, “Freedom of Speech,” Decoding China, https://decodingchina.eu/freedom-of-speech/

[4] Congressional Research Service, The First Amendment: Categories of Speech, United States, 2024, https://sgp.fas.org/crs/misc/IF11072.pdf

[5] OHCHR, “Freedom of Opinion and Expression - Factsheet,” Office of the High Commissioner for Human Rights, https://www.ohchr.org/sites/default/files/Documents/Issues/Expression/Factsheet_1.pdf

[6] Gehan Gunatilleke, “Justifying Limitations on the Freedom of Expression,” Human Rights Review 22 (2021), https://doi.org/10.1007/s12142-020-00608-8; Congressional Research Service, The First Amendment: Categories of Speech; “Article 19 in Constitution of India,” Indian Kanoon, https://indiankanoon.org/doc/1218090/

[7] “Why Do We Talk So Much about Foreign Interference,” Harvard University Davis Center for Russian and Eurasian Studies, April 17, 2021, https://daviscenter.fas.harvard.edu/insights/why-do-we-talk-so-much-about-foreign-interference

[8] Stray, Iyer, and Larrauri, The Algorithmic Management of Polarization and Violence on Social Media; Guri Nordtorp Mølmen and Jacob Aasland Ravndal, “Mechanisms of Online Radicalisation: How the Internet Affects the Radicalisation of Extreme-Right Lone Actor Terrorists,” Behavioral Sciences of Terrorism and Political Aggression 15, no. 4 (2021): 463–87, https://doi.org/10.1080/19434472.2021.1993302; Philip Baugut and Katharina Neumann, “Online Propaganda Use during Islamist Radicalization,” Information, Communication & Society 23, no. 11 (2019): 1570–92, https://doi.org/10.1080/1369118X.2019.1594333; Sana Ali, Hiba Abou Haykal, and Enaam Youssef Mohammed Youssef, “Child Sexual Abuse and the Internet—A Systematic Review,” Human Arenas 6 (2023): 404–421, https://doi.org/10.1007/s42087-021-00228-9

[9] Filippo Menczer and Thomas Hills, “The Attention Economy,” Scientific American, 2020, https://warwick.ac.uk/fac/sci/psych/people/thills/thills/2020menczerhills2020.pdf

[10] Fernando N van der Vlist and Anne Helmond, “How Partners Mediate Platform Power: Mapping Business and Data Partnerships in the Social Media Ecosystem,” Big Data and Society 8, no. 1 (2021), https://journals.sagepub.com/doi/10.1177/20539517211025061

[11] Anne Helmond, David B. Nieborg, and Fernando N. van der Vlist, “Facebook’s Evolution: Development of a Platform-as-Infrastructure,” Internet Histories 3, no. 2 (2019), https://www.tandfonline.com/doi/full/10.1080/24701475.2019.1593667#d1e238

[12] Gorwa, “The Platform Governance Triangle”

[13] Gorwa, “The Platform Governance Triangle”

[14] Douglas Broom, “How Can We Prevent Online Harm Without a Common Language for It? These 6 Definitions Will Help Make the Internet Safer,” World Economic Forum, September 1, 2023, https://www.weforum.org/stories/2023/09/definitions-online-harm-internet-safer/; Ofcom, Internet Users’ Experience of Potential Online Harms: Summary of Survey Research, Information Commissioner’s Office, 2020, https://www.ofcom.org.uk/siteassets/resources/documents/research-and-data/online-research/online-harms/2020/concerns-and-experiences-online-harms-2020-chart-pack.pdf?v=324902

[15] Broom, “How Can We Prevent Online Harm Without a Common Language for It?”

[16] Daniel Karell, “Online Extremism and Offline Harm,” Social Science Research Council, June 1, 2021, https://items.ssrc.org/extremism-online/online-extremism-and-offline-harm/

[17] Stephanie Kulke, “Social Media Algorithms Exploit How We Learn from Our Peers,” Northwestern Now, August 3, 2023, https://news.northwestern.edu/stories/2023/08/social-media-algorithms-exploit-how-humans-learn-from-their-peers/

[18] Tzu-Chieh Hung and Tzu-Wei Hung, “How China’s Cognitive Warfare Works: A Frontline Perspective of Taiwan’s Anti-Disinformation Wars,” Journal of Global Security Studies 7, no. 4 (2022), https://doi.org/10.1093/jogss/ogac016.

[19] Christoph Schmon and Haley Pederson, “Platform Liability Trends Around the Globe: From Safe Harbors to Increased Responsibility,” Electronic Frontier Foundation, May 19, 2022, https://www.eff.org/deeplinks/2022/05/platform-liability-trends-around-globe-safe-harbors-increased-responsibility

[20] Robert Gorwa, “What is Platform Governance?,” Information, Communication & Society 22, no. 6 (2019): 854–871, https://doi.org/10.1080/1369118X.2019.1573914.

[21] Schmon and Pederson, “Platform Liability Trends Around the Globe”

[22] Sam Levin, Julia Carrie Wong, and Luke Harding, “Facebook Backs Down from ‘Napalm Girl’ Censorship and Reinstates Photo,” The Guardian, September 9, 2016, https://www.theguardian.com/technology/2016/sep/09/facebook-reinstates-napalm-girl-photo

[23] Aimee Picchi, “Twitter Files: What They Are and Why They Matter,” CBS News, December 14, 2022, https://www.cbsnews.com/news/twitter-files-matt-taibbi-bari-weiss-michael-shellenberger-elonmusk/

[24] “YouTube Statistics 2023: How Many Videos Are Uploaded to YouTube Every Day?,” Wyzowl, 2023, https://www.wyzowl.com/youtube-stats/

[25] “The Role of AI in Content Moderation and Censorship,” Aicontentfy, November 6, 2023, https://aicontentfy.com/en/blog/role-of-ai-in-content-moderation-and-censorship

[26] “Content Moderation Case Study: YouTube Doubles Down on Questionable Graphic Content Enforcement Before Reversing Course,” TechDirt, February 16, 2022, https://www.techdirt.com/2022/02/16/content-moderation-case-study-youtube-doubles-down-questionable-graphic-content-enforcement-before-reversing-course-2020/

[27] Chris James and Mike Pappas, “The Importance of Mental Health for Content Moderators,” Family Online Safety Institute, May 9, 2023, https://www.fosi.org/good-digital-parenting/the-importance-of-mental-health-for-content-moderators

[28] Poppy McPherson, “Facebook Says it Was ‘Too Slow’ to Fight Hate Speech in Myanmar,” Reuters, August 16, 2018, https://www.reuters.com/article/world/facebook-says-it-was-too-slow-to-fight-hate-speech-in-myanmar-idUSKBN1L1066/

[29] Kara Manke, “Study Finds Persistent Spike in Hate Speech on X,” UC Berkeley News, February 13, 2025, https://news.berkeley.edu/2025/02/13/study-finds-persistent-spike-in-hate-speech-on-x/

[30] Mike Wendling, “Twitter and Hate Speech: What’s the Evidence?,” BBC News, April 13, 2023, https://www.bbc.com/news/world-us-canada-65246394

[31] eSafety Commissioner, “Basic Online Safety Expectations: Non-Periodic Notices Issued,” Australian Government, October 2023, https://www.esafety.gov.au/industry/basic-online-safety-expectations/responses-to-transparency-notices/non-periodic-notices-issued-Februrary-2023-key-findings

[32] Information Technology Act 2000, Ministry of Electronics and IT, Government of India, https://www.meity.gov.in/content/information-technology-act-2000

[33] Information Technology Act 2000

[34] IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, Ministry of Electronics and IT, Government of India, https://www.meity.gov.in/content/information-technology-intermediary-guidelines-and-digital-media-ethics-code-rules-2021

[35] Aaradhya Bachchan v Bollywood Time, CS (Comm.) No.23 of 2023, 2023 SCC OnLine Del 2268

[36] The Hon Michelle Rowland MP, Minister for Communications, “Online Safety Expectations to Boost Transparency and Accountability for Digital Platforms,” https://minister.infrastructure.gov.au/rowland/media-release/online-safety-expectations-boost-transparency-and-accountability-digital-platforms

[37] eSafety Commissioner, “Learn about the Online Safety Act,” Australian Government, https://www.esafety.gov.au/whats-on/online-safety-act

[38] “France: Constitutional Court Strikes Down Key Provisions of Bill on Hate Speech,” Library of Congress, June 29, 2020, https://www.loc.gov/item/global-legal-monitor/2020-06-29/france-constitutional-court-strikes-down-key-provisions-of-bill-on-hate-speech/

[39] “France’s Watered Down Anti-Hate Speech Law Enters into Force,” Universal Rights Group, July 16, 2020, https://www.universal-rights.org/frances-watered-down-anti-hate-speech-law-enters-into-force/

[40] “The UN Secretary-General Remarks to Launch the Global Principles for Information Integrity,” United Nations Information Service, June 2023, https://www.un.org/en/UNIS-Nairobi/un-secretary-general-remarks-launch-global-principles-information-integrity

The views expressed above belong to the author(s).

Authors

Anulekha Nandi

Dr. Anulekha Nandi is a Fellow - Centre for Security, Strategy and Technology at ORF. Her primary area of research includes digital innovation management and ...

Anirban Sarma

Anirban Sarma is Director of the Digital Societies Initiative at the Observer Research Foundation. His research explores issues of technology policy, with a focus on ...
