Algorithms on social media now fuel radicalisation and disinformation—posing a growing national security threat that regulators are racing to contain.
In an age where perception can define power, social media platforms once heralded as instruments of democratisation have become dangerous vectors of polarisation and extremism. At the heart of this transformation lies an often-overlooked agent: the algorithm. In an unprecedented move that could redefine digital accountability in Europe, French prosecutors have launched a formal criminal investigation into X (formerly Twitter) over allegations of “organised algorithmic manipulation” and “fraudulent data extraction”. Initiated in July 2025, the probe, now led by the J3 cybercrime unit and the National Gendarmerie, alleges that the platform’s recommendation algorithms were altered to restrict content diversity and potentially amplify foreign propaganda, including racist and anti-LGBT content, raising concerns over the fundamental rights to free expression and privacy.
The challenge France faces is universal. Initially intended to customise content and augment user engagement, algorithmic recommendation systems have emerged as significant, if unintentional, enablers of radicalisation. As state and non-state actors weaponise the digital space, the algorithm is no longer just a tool; it is a threat surface.
According to DataReportal’s Global Digital Insights, as of April 2025, approximately 5.31 billion people worldwide use social media platforms such as Facebook, Telegram, Instagram, and X (formerly Twitter) on a daily basis. These platforms now shape political discourse, social cohesion, and even national security. The urgency to regulate or redesign their algorithms has never been greater.
At their core, recommendation algorithms are mathematical functions programmed to curate content based on user behaviour. They observe what people click, watch, like, and share, and then deliver more of the same, a process akin to ‘reinforcement learning’. Although this may appear harmless and efficient, in reality, the algorithm does not distinguish extremist propaganda from benign content; it optimises for attention.
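For illustration, the following deliberately simplified Python sketch models that optimisation loop; the ToyRecommender class, signal weights, and topic tags are hypothetical assumptions, not any platform’s actual code.

```python
# A toy engagement-optimised recommender. Class, weights, and topic tags
# are illustrative assumptions, not any platform's actual system.
from collections import defaultdict

# Hypothetical engagement signals: stronger signals earn larger updates.
SIGNAL_WEIGHTS = {"click": 1.0, "like": 2.0, "share": 3.0}

class ToyRecommender:
    def __init__(self, items):
        self.items = items                        # item_id -> set of topic tags
        self.topic_affinity = defaultdict(float)  # learned per-user interest

    def record_engagement(self, item_id, signal):
        # Reinforce every topic attached to the item the user engaged with.
        for topic in self.items[item_id]:
            self.topic_affinity[topic] += SIGNAL_WEIGHTS[signal]

    def rank_feed(self, candidate_ids):
        # Score candidates purely by predicted engagement: the sum of the
        # user's affinity for the item's topics. Nothing in this objective
        # asks whether content is extreme, only whether it resembles what
        # previously held the user's attention.
        def score(item_id):
            return sum(self.topic_affinity[t] for t in self.items[item_id])
        return sorted(candidate_ids, key=score, reverse=True)

items = {
    "v1": {"cooking"},
    "v2": {"politics"},
    "v3": {"politics", "conspiracy"},
    "v4": {"conspiracy"},
}
rec = ToyRecommender(items)
rec.record_engagement("v2", "click")  # mild political curiosity...
rec.record_engagement("v3", "share")  # ...then a stronger engagement signal
print(rec.rank_feed(["v1", "v2", "v3", "v4"]))  # ['v3', 'v2', 'v4', 'v1']
```

The scoring function never inspects what the content says; it only measures resemblance to whatever previously held attention, and that indifference is precisely what extremist material exploits.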
Extremism thrives on engagement, and algorithms are built to prize engagement above all else. This intersection creates what can only be termed a ‘radicalisation spiral’: a feedback loop in which the user is gradually pushed from curiosity to confirmation to extremism.
Research from the Stanford Internet Observatory has consistently shown that users who interact with contentious, provocative, or conspiratorial content are more likely to be drawn towards radical material. This is not an anomaly but a structural feature of how algorithmic logic functions.
The traditional models of radicalisation, which relied on physical safe havens, personal contact with recruiters, or closed-door indoctrination, have evolved. Today, initial exposure and ideological grooming often occur online and are facilitated algorithmically: the first point of ideological contact is frequently a recommended post, a trending video, or a viral meme.
Extremist actors are highly adaptive and understand the algorithm. They know how to use emotional keywords, trending hashtags, manipulated images, and real-time disinformation to ride the wave of recommendations. Social media platforms do not just reflect public opinion; they shape it. When extremist groups seize that shaping process, the result is widespread cognitive manipulation.
The implications for national security are profound and multifaceted. At the domestic level, digital echo chambers have fuelled communal violence, religious polarisation, and distrust in democratic institutions, threatening the internal security of a nation; the 2020 Delhi riots surrounding the Citizenship Amendment Act–National Register of Citizens (CAA-NRC) protests are a case in point. These phenomena undermine the cohesion of pluralistic societies, especially in fragile democracies and multi-ethnic states.
Terrorist groups have strategically abused social media algorithms to amplify their propaganda, recruit followers, and incite cross-border radicalisation. This phenomenon is evident in regions including Kashmir and Punjab, where online platforms have become a hub for extremist narratives. In Kashmir, groups such as the Kashmir Tigers or the Resistance Front (TRF), a Pakistan-sponsored proxy group, have skilfully adapted their communication methods for the digital age, producing high-quality visual content, utilising hashtags, and engaging in narrative warfare that aligns with algorithmic trends to maximise visibility. By framing militancy as resistance, these groups have succeeded in radicalising young people. Similarly, elements within the Khalistan movement have weaponised social media to revive separatist sentiments in Punjab, especially among segments of the Sikh diaspora in Canada, the United Kingdom (UK), and Australia. Their campaigns often rely on emotionally charged content, historical grievances, and misinformation, algorithmically boosted to appear more prominently in user feeds. The virality afforded by algorithmic amplification has enabled these fringe ideologies to gain traction, blurring the line between organic dissent and state-sponsored digital subversion.
The regulation of algorithms faces several interlinked challenges. The most persistent is a lack of transparency: algorithms often operate as ‘black boxes’, making it difficult even for their developers to explain their outcomes. Corporate secrecy further obstructs audits and accountability.
Transnational jurisdictional gaps persist as global technology firms operate across borders while legal frameworks remain rooted in national systems, creating enforcement loopholes, particularly for countries in the Global South. National laws rarely address algorithmic decision-making directly; anti-discrimination and information technology laws tend to offer vague definitions, making bias without intent difficult to prove or penalise. Compounding this is the limited technical capability in developing states, where regulators and law enforcement agencies often lack the trained personnel and forensic tools to counter algorithm-driven radicalisation or discrimination. Furthermore, algorithms usually inherit and exacerbate the social biases embedded in their historical training data, resulting in unfair outcomes across various domains, including employment, justice, credit, and content curation. Efforts to regulate these harms often clash with other rights, including privacy, free expression, and intellectual property, making governance more complex. Meanwhile, public understanding remains low, with most users uninformed about how algorithms shape their perceptions or how to challenge discriminatory outcomes.
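To make the bias-inheritance point concrete, here is a deliberately simplified sketch, using entirely fabricated data, of how a naive model fitted to skewed historical hiring decisions reproduces the skew.

```python
# A minimal illustration of bias inheritance: a "hiring" model fitted to
# skewed historical decisions reproduces the skew. All data is fabricated.
from collections import defaultdict

# Hypothetical historical decisions: (group, qualified, hired).
history = [
    ("A", True, True), ("A", True, True), ("A", False, True),
    ("B", True, False), ("B", True, True), ("B", False, False),
]

# "Training": estimate the historical hire rate per group and use it as
# the model's score, exactly as a naive learner keyed on group would.
totals, hires = defaultdict(int), defaultdict(int)
for group, _, hired in history:
    totals[group] += 1
    hires[group] += hired
score = {g: hires[g] / totals[g] for g in totals}

# The score ignores qualification entirely, so two equally qualified
# candidates from different groups receive very different scores.
print(score)  # {'A': 1.0, 'B': 0.333...}
```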
Therefore, a comprehensive global strategy is essential to address the growing risks of algorithmic manipulation and online radicalisation. A Global Algorithmic Transparency Framework under the aegis of the United Nations (UN) or the Organisation for Economic Co-operation and Development (OECD) should mandate the disclosure of algorithm design and risk audits, particularly for Very Large Online Platforms (VLOPs). In tandem, a Digital Threat Intelligence Sharing mechanism, modelled on Interpol, could enable real-time detection of and cross-border alerts on extremist content using shared data and Artificial Intelligence (AI) tools. Legal provisions must also evolve: manipulating algorithms to amplify hate or radical ideologies should be criminalised globally, making platforms and their executives legally liable. Equally important is the Cross-Border Harmonisation of AI Laws through an international convention, building on the precedent set by the AI Action Summit held in France in February 2025, to align enforcement standards. At the national level, Digital Threat Centres should be established, bringing together expertise in technology, behavioural science, and intelligence to detect and disrupt radicalisation pathways. Independent audits must be mandated to regularly assess algorithmic harms, following models such as the European Union’s Digital Services Act (2022), which allows for third-party scrutiny of platform algorithms.
Furthermore, investment in Red-Teaming and Stress Testing of algorithms is necessary to anticipate misuse by terrorists or hate groups. This should be complemented by a Real-Time Monitoring Infrastructure equipped with automated tools and trained analysts to respond to live threats, especially during volatile periods such as elections or communal unrest. Finally, Public Digital Literacy Campaigns are vital to build societal resilience by educating communities, particularly the youth, on recognising and resisting algorithmic manipulation.
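What might such red-teaming look like in practice? The sketch below runs a ‘drift test’ against a toy recommender: a scripted persona engages only with what the system surfaces, and the test measures how quickly flagged content comes to dominate the feed. The catalogue, persona, and pass/fail threshold are hypothetical illustrations, not an established audit standard.

```python
# A toy "drift test" for red-teaming a recommender. The catalogue, persona,
# and pass/fail threshold are illustrative assumptions, not a real audit spec.
from collections import defaultdict

ITEMS = {  # hypothetical catalogue: item_id -> topic tags
    "v1": {"cooking"},
    "v2": {"politics"},
    "v3": {"politics", "conspiracy"},
    "v4": {"conspiracy"},
}

def run_drift_test(sessions=10, flagged_topic="conspiracy", threshold=0.5):
    """Simulate a persona who clicks the top two recommendations each session,
    and report how often the #1 slot carried the flagged topic."""
    affinity = defaultdict(float)
    affinity["politics"] = 1.0  # persona arrives with mild political curiosity
    flagged_at_top = 0
    for _ in range(sessions):
        # Rank the catalogue by the persona's accumulated topic affinity.
        ranked = sorted(ITEMS, reverse=True,
                        key=lambda i: sum(affinity[t] for t in ITEMS[i]))
        if flagged_topic in ITEMS[ranked[0]]:
            flagged_at_top += 1
        for item in ranked[:2]:        # persona clicks the top two items,
            for topic in ITEMS[item]:  # reinforcing each of their topics
                affinity[topic] += 1.0
    share = flagged_at_top / sessions
    verdict = "FAIL" if share > threshold else "PASS"
    print(f"flagged-topic share of the #1 slot: {share:.0%} -> {verdict}")

run_drift_test()  # with this toy catalogue: 90% -> FAIL
```

Even this crude test surfaces the spiral: a single mild political click is enough for conspiracy-tagged items to capture the top slot within two sessions and hold it thereafter.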
The regulation of algorithmic systems, especially in the context of radicalisation, discrimination, and geopolitical conflict, is no longer a future problem. It is a present-day governance crisis, particularly for developing democracies. The opacity of proprietary algorithms—combined with jurisdictional ambiguity and resource limitations—has created a regulatory blind spot exploited by both extremist networks and adversarial states.
Effective regulation will require a layered approach: global cooperation for standard-setting, national legal reform to address domestic gaps, and local capacity-building to detect and counter harms in real time. Most critically, algorithmic harms must be treated not just as technical glitches or corporate oversights, but as issues of security, civil rights, and public trust. Policymakers, tech companies, civil society, and academia must collaborate to create an algorithmically safe and fair digital future.
Soumya Awasthi is a Fellow with the Centre for Security, Strategy, and Technology at the Observer Research Foundation.
The views expressed above belong to the author(s).