From deepfakes to chatbots, jihadist groups are exploiting AI to radicalise young minds and outpace counterterror efforts online.
In the evolving landscape of global security, technological innovation is no longer the preserve of state actors or corporate giants. Extremist groups, particularly jihadist networks, have increasingly begun to use advanced tools and artificial intelligence (AI) to bolster their recruitment, indoctrination, and propaganda dissemination strategies. As digital natives mature into their roles as both architects and targets of the online world, jihadi groups are adapting at an alarming speed, leveraging AI not only to reach broader audiences but also to outpace traditional counterterrorism efforts.
The deployment of AI by jihadist organisations marks a notable shift from conventional digital propaganda tactics to more dynamic and personalised forms of engagement. These groups are no longer merely posting videos or PDFs on obscure forums; they are beginning to incorporate AI tools that mimic marketing strategies used by commercial entities.
For instance, Europol’s 2022 report on “Digitalisation and Terrorism” warned of extremist experimentation with AI for automatic content generation, facial recognition avoidance, and trend analysis. While many jihadist factions may not possess in-house AI expertise, they frequently exploit open-source AI tools and darknet services, enabling the automation of recruitment messages and targeted outreach with minimal human intervention.
In 2025, radicalisation efforts are increasingly aimed at the younger generation, often referred to as ‘digital natives’. Having grown up with social media, gaming, and personalised content algorithms, these individuals are particularly susceptible to AI-powered recruitment strategies.
Anwar al-Awlaki, an American-Yemeni cleric and prominent English-language propagandist for Al-Qaeda, remains a key figure in online radicalisation programmes. His translated extremist content continues to circulate widely and is frequently referenced in jihadist indoctrination. AI-enhanced recommendation systems often resurface such content in autoplay suggestions or manipulated search results, especially when it aligns with specific keywords, making it difficult for platforms to purge radical material thoroughly.
Moreover, AI tools help recruiters simulate legitimate interactions through seemingly innocuous profiles, blogs, or chat rooms. By mimicking influencers or interest-based communities, extremists gain users’ trust before subtly introducing radical narratives. Chatbots deployed in these spaces mimic human conversation and adjust their responses to emotional and linguistic cues, creating a false sense of trust and validation for vulnerable individuals.
Perhaps one of the most concerning developments is the potential use of deepfakes and synthetic media by jihadist groups. With simple tools, extremist actors can generate convincing videos that impersonate prominent religious scholars or state leaders, either to sow disinformation or to glorify martyrdom.
Terrorist groups are increasingly exploiting social media through generative AI to scale propaganda, manipulate users, and recruit followers. They create hyper-realistic fake images, videos, and text, such as scenes of injured children or fabricated attacks that evoke strong emotions and appear authentic, making detection and moderation difficult.
Deepfake technology also poses a significant risk to counterterrorism by masking the identity of jihadi propagandists, rendering facial recognition tools less effective.
Groups such as the Islamic State of Iraq and Syria (ISIS), Al-Qaeda, Hamas, and Hezbollah are already using such tools to spread tailored messages rapidly across multiple platforms and languages. AI-powered chatbots further enhance recruitment efforts by simulating real-time, personalised conversations.
Extremist recruiters exploit trending hashtags and hijack the comment sections of viral content to insert propaganda. Some groups go further, using AI-driven analytics to monitor engagement metrics such as click-through rates, drop-off points, and user sentiment, allowing them to optimise their messaging with a precision once reserved for professional political campaigns.
When removed from mainstream platforms, these groups shift to less-regulated spaces such as Telegram or Justpaste.it, where AI-generated content spreads rapidly. Additionally, fake personas with faces produced by Generative Adversarial Networks (GANs) are deployed to push extremist content through coordinated disinformation campaigns, blending into online communities. This convergence of generative AI, algorithmic amplification, and behavioural analytics has turned social media into a highly efficient, low-cost weapon in the hands of extremist networks.
The blurring line between digital entertainment and ideological indoctrination presents another threat: the gamification of extremism. Jihadi groups have long admired the immersive qualities of video games, and now, with AI-backed game design tools becoming more accessible, there is genuine concern that virtual environments could be developed to train or radicalise youth in engaging ways. For instance, ISIS has previously encouraged sympathisers to modify popular first-person shooter games such as Call of Duty by inserting Islamic State symbols and imagery.
AI-enabled cryptocurrency trading bots have also been reported in fundraising operations; the Financial Action Task Force (FATF) and other financial watchdogs have flagged suspected terror funding through such AI-enabled automation. Additionally, terrorists in India used PayPal and Amazon to channel funds and procure materials for attacks, including components for the Pulwama bombing.
AI-driven chatbots represent one of the most scalable tools in the radicalisation toolkit. By employing natural language processing (NLP) and encryption, extremists can deploy 24/7 recruitment interfaces that simulate human conversation and are nearly impossible to monitor effectively.
A study by the Global Disinformation Index warned of the growing trend of chatbot radicalisation, where users could engage in prolonged, increasingly radical dialogue with an automated persona. These bots can mimic religious mentors, offer personal guidance, and answer theological questions, all while reinforcing extremist ideology.
Unlike human recruiters, AI bots can simultaneously guide hundreds, if not thousands, through a pipeline of ideological grooming.
The intersection of AI and jihadist propaganda poses challenges for global counterterrorism agencies. Traditional monitoring techniques—such as keyword tracking or account flagging—are becoming increasingly ineffective. Another pressing issue is the erosion of content hashing systems, as generative AI produces endless variants of extremist material that evade detection by conventional hash-matching tools. Additionally, AI enables hyper-personalised, data-driven messaging, allowing extremist groups to craft narratives based on user behaviour, sentiment, and geographical location.
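The erosion of hash matching is straightforward to illustrate. Conventional systems compare cryptographic digests of known extremist files against incoming uploads, but any regenerated variant produces an entirely different digest. A minimal Python sketch, with placeholder strings standing in for real content:

```python
import hashlib

# Two near-identical texts: a single trailing space is enough
# to change the cryptographic digest completely.
original = "placeholder caption"
variant = "placeholder caption "  # trivially regenerated variant

print(hashlib.sha256(original.encode()).hexdigest())
print(hashlib.sha256(variant.encode()).hexdigest())
# The two digests share nothing, so a database of known-bad exact
# hashes misses every machine-generated rewording or re-encoding.
```

Because generative AI can produce such variants endlessly and at no cost, exact-match databases face an attacker who never has to reuse a file.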
The use of deepfakes and synthetic media further complicates matters, offering deniability by obscuring the origin, creator, and intent behind content, thereby undermining legal accountability. Regulatory inconsistencies and varying platform vulnerabilities exacerbate the problem, with moderation efforts often hindered by human moderators who lack the necessary linguistic or technical expertise. Finally, extremist content personalised to individual profiles based on browsing history or geography makes counter-radicalisation efforts more complex and resource-intensive.
To strengthen counterterrorism (CT) capabilities in the AI era, one critical step is to employ next-generation AI-powered content moderation systems. These tools analyse semantic patterns, linguistic markers, and contextual cues to detect and flag AI-generated propaganda. Organisations such as Tech Against Terrorism, in collaboration with Microsoft, have begun supporting smaller platforms in integrating such capabilities.
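As an illustration of how such semantic screening can work, the following is a minimal sketch using the open-source sentence-transformers library and its public all-MiniLM-L6-v2 model; the reference texts and the 0.75 threshold are hypothetical placeholders, not the configuration of any deployed moderation system:

```python
# Minimal sketch of embedding-based semantic flagging; assumes the
# open-source sentence-transformers library. Reference texts and
# threshold are illustrative placeholders only.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Vetted reference material maintained by analysts (placeholders here).
reference_texts = ["<vetted reference text 1>", "<vetted reference text 2>"]
reference_embeddings = model.encode(reference_texts, convert_to_tensor=True)

def flag_post(post: str, threshold: float = 0.75) -> bool:
    """Flag a post whose meaning is close to any vetted reference text,
    even when a generative model has paraphrased the wording."""
    post_embedding = model.encode(post, convert_to_tensor=True)
    similarity = util.cos_sim(post_embedding, reference_embeddings)
    return bool(similarity.max() >= threshold)
```

Because the comparison happens in meaning space rather than on exact wording, paraphrased variants that defeat keyword filters can still register as near-duplicates.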
Another priority is the expansion of cross-platform threat intelligence through initiatives such as the Global Internet Forum to Counter Terrorism (GIFCT) and the Global Network on Extremism and Technology (GNET). These frameworks aim to incorporate semantic hashing and AI fingerprint-sharing to detect extremist content regardless of where it first appears. There is also growing advocacy for centralised multilingual content databases that can track AI-generated extremist material.
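The details of GIFCT’s hash-sharing infrastructure are not fully public, but the underlying idea of a similarity-preserving fingerprint can be sketched with the classic SimHash algorithm, under which near-duplicate texts yield fingerprints that differ in only a few bits:

```python
import hashlib
import re

def simhash(text: str, bits: int = 64) -> int:
    """Fingerprint a text so that near-duplicates differ in few bits."""
    vector = [0] * bits
    for token in re.findall(r"\w+", text.lower()):
        h = int.from_bytes(hashlib.md5(token.encode()).digest()[:8], "big")
        for i in range(bits):
            vector[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if vector[i] > 0)

def hamming(a: int, b: int) -> int:
    """Count the differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Near-duplicates land within a small Hamming distance of each other,
# so one shared fingerprint can match many reworded variants.
a = simhash("the quick brown fox jumps over the lazy dog")
b = simhash("the quick brown fox leaps over the lazy dog")
print(hamming(a, b))  # small distance despite the changed word
```

A shared database of such fingerprints lets a second platform recognise a variant of content first flagged elsewhere, which is precisely what exact hashing cannot do.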
In parallel, human-in-the-loop systems are proving essential. Projects such as the Investigative Pattern Detection Framework for Counterterrorism (INSPECT) combine machine learning-based behavioural analysis with expert human oversight to detect early signs of radicalisation. This hybrid approach balances the speed of automation with the nuance of human judgment, which is particularly important when dealing with content that operates in linguistic or cultural grey areas.
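INSPECT’s internals are not public, so the following is only a schematic sketch of the general human-in-the-loop pattern it represents: an upstream classifier scores content, high-confidence cases are handled automatically, and the uncertain middle band is routed to analysts. All thresholds are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    content_id: str
    risk_score: float  # produced by an upstream ML classifier

@dataclass
class TriageQueue:
    auto_action: float = 0.95   # near-certain: act automatically
    human_review: float = 0.60  # uncertain band: route to an analyst
    review_queue: list = field(default_factory=list)

    def route(self, item: Item) -> str:
        if item.risk_score >= self.auto_action:
            return "remove"        # high-confidence automated takedown
        if item.risk_score >= self.human_review:
            self.review_queue.append(item)
            return "human_review"  # expert judgment for grey areas
        return "allow"
```

The design point is that automation absorbs volume at the extremes while scarce expert attention is concentrated on exactly the grey-area cases where models are least reliable.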
Lastly, governments must implement unified, cross-border regulations to address AI-enabled threats. This includes training human moderators in under-resourced languages, mandating transparency in AI content origin, and equipping institutions with the legal and technical infrastructure to act swiftly.
For example, the Government of India has updated the Information Technology Act, 2000 and notified the Information Technology Rules, 2021 (‘IT Rules, 2021’) to mandate the use of automated tools to detect harmful content, while the National Cyber Coordination Centre has been instrumental in blocking 295,000 fake Subscriber Identification Module (SIM) cards, 46,000 International Mobile Equipment Identity (IMEI) numbers, over 2,800 websites/URLs, and 595 mobile applications by 2025. Similarly, Australia, the United States (US), and the United Kingdom (UK) mandate platform accountability for the algorithmic amplification of harmful content.
Collectively, these strategies, pairing enhanced detection tools with aligned policy reform, form a robust response to the evolving misuse of AI by extremist actors.
The integration of AI into jihadist recruitment strategies marks a dangerous evolution in the threat landscape. By targeting digital natives with algorithmically enhanced propaganda, leveraging deepfakes, and deploying scalable tools such as encrypted chatbots, extremist organisations are exploiting the very technologies designed to connect and inform.
Governments, tech companies, and civil society must urgently coordinate to confront this challenge; only through intelligent collaboration between states, platforms, regulators, and civil society can the tide of AI-powered jihadism be curbed before it irreversibly reshapes the global security landscape. Ultimately, this fight will require not just more innovative tools but also a deeper understanding of the social and psychological factors that drive radicalisation in the digital era.
Soumya Awasthi is a Fellow with the Centre for Security, Strategy, and Technology at the Observer Research Foundation.
The views expressed above belong to the author(s).