AI can transform India’s counter-terrorism response, but the real impact depends on institutional reform, inter-agency coordination, and global partnerships.
The intersection of Artificial Intelligence (AI) and counter-terrorism (CT) represents a transformative shift in how modern states anticipate, detect, and neutralise threats. As terrorism evolves in both scale and sophistication—leveraging encrypted communications, digital propaganda, drone technology, and decentralised networks—states must adopt equally agile and anticipatory mechanisms. For a country such as India, facing multi-layered security threats from cross-border terrorism, insurgency, and cyber-radicalisation, AI, with its capacity to analyse vast datasets, identify patterns, and support real-time decision-making, can become a central pillar in its CT framework.
Over the past few years, AI has evolved as a double-edged sword in CT: being abused by violent non-state actors and harnessed by state actors. Non-state actors have exploited AI for recruitment, training, propaganda, and radicalisation. Organisations such as al-Qaeda and the Islamic State have used AI-generated deepfake content and encrypted communications to radicalise followers and coordinate terror attacks.
Pakistan-sponsored groups such as The Resistance Front and Kashmir Tigers have adopted AI-enabled technologies, including in their media campaigns. Similarly, the Houthis used drones in 2021 to attack Saudi Arabia’s oil infrastructure. The 2018 swarm-drone assault on Russian military assets in Syria, attributed to Ahrar al-Sham, illustrates how accessible off-the-shelf technologies—drones, encrypted apps, and other AI-enabled tools—are reshaping asymmetric warfare.
AI is rapidly altering CT by empowering faster threat detection, real-time monitoring, and predictive risk assessment. Its amalgamation into CT operations improves the accuracy and timeliness of both preventive and responsive measures.
1. Predictive Analytics and Behavioural Modelling
AI can identify acts of terrorism by modelling human behaviour and analysing vast datasets, leveraging Natural Language Processing (NLP) and machine learning algorithms. Through semantic analysis, such models can link context to the coded language that terrorist networks often use to evade detection. Similarly, predictive analytics systems enable intelligence agencies to detect early warning signs, profile high-risk individuals, and simulate scenarios, thereby facilitating strategic decision-making.
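To make the idea concrete, the sketch below scores a message by its bag-of-words similarity to a lexicon of coded terms. The lexicon, function names, and scoring method are purely illustrative assumptions; a deployed system would rely on trained NLP models and far richer context than simple word overlap.

```python
# Minimal sketch of coded-language scoring, assuming a hypothetical
# lexicon of benign-sounding cover terms. Real systems use trained
# semantic models; plain word overlap is a stand-in for illustration.
from collections import Counter
import math

CODED_LEXICON = ["package", "delivery", "wedding", "gift"]  # hypothetical

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def risk_score(message: str) -> float:
    """Score a message by its overlap with the coded-term lexicon."""
    return cosine(Counter(message.lower().split()), Counter(CODED_LEXICON))

# Messages mentioning several lexicon terms score higher than unrelated text.
assert risk_score("the wedding gift package arrives tomorrow") > \
       risk_score("weather is nice today")
```

In practice such a score would be one weak signal among many, feeding the kind of behavioural models and scenario simulations described above rather than triggering action on its own.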
2. Automated Surveillance and Facial Recognition
Computer vision systems can revolutionise surveillance by providing real-time monitoring through facial recognition, object detection, and activity-pattern analysis. Installed in sensitive areas and border regions, these systems can track suspicious behaviour and detect weapons, correlating observations with known datasets and thereby reducing response time and human error.
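The core matching step in such systems is typically a nearest-neighbour search over face embeddings. The sketch below assumes embeddings have already been extracted by an upstream model; the watchlist contents, vector dimensions, and threshold are hypothetical.

```python
# Sketch of watchlist matching on face embeddings, assuming an upstream
# model has already converted images into fixed-length vectors.
import math

def cosine_sim(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def match_watchlist(probe, watchlist, threshold=0.9):
    """Return the best-matching identity above threshold, else None."""
    best_id, best_sim = None, threshold
    for identity, embedding in watchlist.items():
        sim = cosine_sim(probe, embedding)
        if sim >= best_sim:
            best_id, best_sim = identity, sim
    return best_id

# Hypothetical 3-dimensional embeddings (real ones have hundreds of dims).
watchlist = {"suspect_A": [0.9, 0.1, 0.2], "suspect_B": [0.1, 0.8, 0.5]}
assert match_watchlist([0.88, 0.12, 0.21], watchlist) == "suspect_A"
assert match_watchlist([0.0, 0.0, 1.0], watchlist) is None
```

The threshold embodies the false-positive/false-negative trade-off that makes human review and oversight essential in operational deployments.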
3. Drone Detection and Counter-UAV Systems
Counter-drone and counter-Unmanned Aerial Vehicle (UAV) frameworks could integrate AI to automate threat assessment and neutralise hostile targets, defending areas susceptible to aerial infiltration. The technology could classify drones as commercial, civilian, or hostile based on data such as speed, altitude, and payload characteristics.
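A minimal version of that triage logic can be sketched as a rule-based classifier over the three features named above. The thresholds and category rules here are invented for illustration; operational systems fuse radar, radio-frequency, and optical data, and typically learn the decision boundaries rather than hand-coding them.

```python
# Toy rule-based drone triage over speed, altitude, and payload.
# All thresholds are hypothetical; real systems learn these boundaries
# from fused radar, RF, and optical sensor data.
def classify_drone(speed_mps: float, altitude_m: float, payload_kg: float) -> str:
    if payload_kg > 5 or (speed_mps > 40 and altitude_m < 100):
        return "hostile"    # heavy payload or fast low-level approach
    if altitude_m < 120 and speed_mps <= 20:
        return "civilian"   # hobbyist profile within typical limits
    return "commercial"     # everything else, e.g. delivery corridors

assert classify_drone(speed_mps=50, altitude_m=60, payload_kg=0.5) == "hostile"
assert classify_drone(speed_mps=10, altitude_m=80, payload_kg=0.2) == "civilian"
assert classify_drone(speed_mps=30, altitude_m=200, payload_kg=1.0) == "commercial"
```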
4. Social Media Monitoring and Online De-radicalisation
With terrorists increasingly using social media for recruitment, radicalisation, and the dissemination of propaganda, AI systems offer tools—such as automated language translation—for monitoring social media activity, detecting radical discourse, and mapping networks of influence. Furthermore, AI can support cognitive interventions by recommending alternative content to at-risk individuals as a counter-radicalisation resource.
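"Mapping networks of influence" often starts with something as simple as counting who is mentioned or amplified by whom. The sketch below uses inbound mentions as a crude proxy for influence; the account names and the interaction data are invented, and real analyses use richer graph centrality measures.

```python
# Crude influence mapping: count inbound mentions per account.
# Real network analysis would use proper graph centrality measures;
# inbound-mention counts are a simple illustrative proxy.
from collections import defaultdict

def influence_map(interactions):
    """Given (source, target) mention pairs, count inbound mentions."""
    inbound = defaultdict(int)
    for source, target in interactions:
        inbound[target] += 1
    return dict(inbound)

# Hypothetical interaction log: three accounts all amplify "hub".
posts = [("a", "hub"), ("b", "hub"), ("c", "hub"), ("hub", "a")]
scores = influence_map(posts)
assert max(scores, key=scores.get) == "hub"
```

Accounts that emerge as hubs in such a map are natural priorities both for content moderation and for targeted counter-messaging of the kind described above.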
5. Financial Intelligence and Cryptographic Tracking
Companies such as SymphonyAI offer tools to detect illicit financial transactions linked to terrorism. Similarly, Silent Eight combats financial crime by using machine learning algorithms to detect and flag irregular financial activity across the banking sector.
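The detailed methods of such commercial tools are proprietary, but the underlying principle—flagging transactions that deviate sharply from an account's historical pattern—can be sketched with a simple z-score test. The figures and threshold below are illustrative only.

```python
# Simple anomaly flagging: mark amounts more than z_threshold standard
# deviations from an account's historical mean. Commercial systems use
# far richer features; this shows only the general principle.
import statistics

def flag_anomalies(history, new_amounts, z_threshold=3.0):
    """Return the new amounts that deviate sharply from history."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return [amt for amt in new_amounts
            if abs(amt - mean) / stdev > z_threshold]

# Hypothetical account history of routine transactions.
history = [100, 120, 95, 110, 105, 98, 115]
assert flag_anomalies(history, [108, 5000]) == [5000]
```

Flagged transactions would then feed a review queue rather than being blocked outright, since base-rate effects make false positives common in financial screening.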
6. Cyber Threat Detection
As terrorist elements increasingly exploit the cyber domain to target critical infrastructure and institutions, AI-enabled cyber defence systems offer intrusion detection capabilities that continuously update their threat intelligence libraries, enabling agencies to pre-empt attacks on government, military, and civilian targets.
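The "continuously updated threat library" idea can be illustrated with a toy signature-based detector whose signature set grows as new intelligence arrives. The signatures and payloads below are illustrative; real intrusion detection systems combine signatures with behavioural and anomaly-based analysis.

```python
# Toy signature-based intrusion detector with a live-updatable
# signature set, illustrating a continuously refreshed threat library.
class IntrusionDetector:
    def __init__(self, signatures):
        self.signatures = set(signatures)

    def update(self, new_signatures):
        """Fold new threat-intelligence signatures into the library."""
        self.signatures |= set(new_signatures)

    def inspect(self, payload: str):
        """Return the signatures found in a payload."""
        return [sig for sig in self.signatures if sig in payload]

ids = IntrusionDetector({"DROP TABLE", "../../"})
assert ids.inspect("GET /../../etc/passwd") == ["../../"]

# A new signature arrives from a threat-intel feed and takes effect at once.
ids.update({"<script>"})
assert ids.inspect("<script>alert(1)</script>") == ["<script>"]
```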
State actors have adopted AI to enhance national security strategies. For instance, Israel has become one of the foremost users of AI in live conflict zones. During its 2023 Gaza operations, Israeli defence forces employed systems such as ‘Lavender’, ‘Where’s Daddy’, and ‘The Gospel’ for real-time surveillance, target selection, and operational decision-making, marking a momentous evolution in algorithmic warfare.
The United States (US) Department of Defense launched Project Maven, using AI to analyse drone footage to identify hostile movements in regions such as Iraq, Syria, and Yemen. AI-integrated radar systems can detect and track hostile drones, classifying them based on their flight patterns, altitudes, and payloads. The North Atlantic Treaty Organization's (NATO) DEXTER project is another example, where AI identifies armed suspects and UAV threats in crowded environments. Similarly, the AI model SOMA (Stochastic Opponent Modelling Agents) was used by the University of Maryland in the US to study Lashkar-e-Taiba (LeT) and predict how the group would react in specific scenarios. Policymakers then identified key points of disruption and adjusted their responses accordingly, illustrating the utility of AI in cyber-CT.
The United Kingdom (UK), meanwhile, has advanced its capabilities in digital counter-radicalisation. The government’s partnership with private companies has enabled the proactive removal of harmful content through platforms such as ‘Moonshot CVE’, which redirects users seeking extremist content to prevention portals. Similarly, Europol’s ‘SIRIUS’ platform integrates AI to trace terror funding via virtual assets, analysing blockchain transactions, detecting suspicious patterns, and linking wallets to real identities by combining crypto data with social media and Internet Protocol (IP) logs.
Meanwhile, France's anti-terrorism apparatus incorporates the ‘Pharos’ platform—a collaborative AI tool developed in conjunction with social media companies that flags extremist content—allowing French law enforcement to intervene and map digital ecosystems of potential offenders.
India has witnessed a progressive rise in AI-based counter-terrorism initiatives, demonstrating optimism in integrating technology into its internal security architecture. Initiatives such as the Crime and Criminal Tracking Network System (CCTNS), the National Automated Facial Recognition System (AFRS), and the National Intelligence Grid (NATGRID) reflect an institutional shift towards digitised security management. The Defence Research and Development Organisation’s (DRDO) surveillance platform, NETRA (NEtwork TRaffic Analysis), has enabled Indian agencies to monitor encrypted communication and identify early threat signals. During the 2024 Akhnoor encounter, unmanned ground vehicles with semi-autonomous navigation supported troop operations. Additionally, anti-drone systems, such as Indrajaal used by the Indian Navy during Operation Sindoor in Gujarat, and Skynet Intel deployed by the Border Security Force, represent significant advancements in perimeter defence and aerial surveillance.
While the Ministry of Electronics and Information Technology’s 2020 ‘Responsible AI for Social Empowerment’ (RAISE) initiative focuses on development goals, it lacks operational relevance for national security. Instead, the Ministry of Defence’s AI strategy and institutional reforms, such as the creation of the Defence AI Council and Defence AI Project Agency, mark more tangible steps toward integrating AI in counter-terrorism and strategic domains. However, AI adoption within internal security agencies still suffers from limited forensic capability, inter-agency data silos, and inadequate training in threat detection tools, as these often lack integration with AI.
To enhance national preparedness, India must develop a unified national mission for AI in security. This should be modelled on the National Mission for AI but explicitly tailored for counter-terrorism, internal security, and border management, collaborating with all intelligence agencies. A critical area requiring focus is the establishment of regional AI-integrated task forces. These units must be trained in the use of AI for identifying threat patterns from different platforms.
Indian investment in AI research and development remains low compared with other nations—overall R&D spending stands at only 0.6 to 0.7 percent of Gross Domestic Product (GDP)—and this gap needs to be addressed urgently, given the changing nature of the threat. Although DRDO and Bharat Electronics Limited (BEL) have already begun working on AI-based security tools, India needs to scale such initiatives considerably.
Another crucial challenge is language diversity in India; therefore, effective AI monitoring for counter-radicalisation must account for linguistic variations. In India, terrorist content often escapes AI moderation because it appears in non-Latin scripts (such as Hindi, Bengali, and Urdu) or code-mixed languages (for example, Hindi written in English letters). Global AI tools are not well-trained to detect such formats. As a result, harmful messages can spread undetected. Initiatives to develop regional language AI models must be prioritised in collaboration with Indian AI research centres.
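A first step toward handling this diversity is routing content to script-appropriate models. The sketch below detects the dominant Unicode script in a post; note that code-mixed text such as romanised Hindi registers as Latin script, which is precisely why script detection alone is insufficient and dedicated regional-language models are needed.

```python
# Sketch: detect the dominant Unicode script of a post so it can be
# routed to a script-appropriate moderation model. Code-mixed text
# (e.g. Hindi written in Latin letters) still registers as LATIN,
# illustrating why script detection alone cannot solve the problem.
import unicodedata

def dominant_script(text: str) -> str:
    counts = {}
    for ch in text:
        if ch.isalpha():
            # Unicode names begin with the script, e.g. "DEVANAGARI LETTER YA".
            script = unicodedata.name(ch, "UNKNOWN").split()[0]
            counts[script] = counts.get(script, 0) + 1
    return max(counts, key=counts.get) if counts else "UNKNOWN"

assert dominant_script("यह एक परीक्षण है") == "DEVANAGARI"
assert dominant_script("yeh ek test hai") == "LATIN"   # romanised Hindi
```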
India should actively participate in international collaborations, such as the United Nations Counter-Terrorism Committee and the Global Counterterrorism Forum, as well as frameworks for cooperation led by the Five Eyes Alliance, including Signal Intelligence (SIGINT), Human Intelligence (HUMINT), and Geospatial Intelligence (GEOINT). Bilateral cooperation with technologically advanced nations, such as the United Arab Emirates, Israel, Australia, and France, can help India acquire and adapt best practices. On the legal front, India’s Information Technology Act and the Unlawful Activities (Prevention) Act (UAPA) must be amended to include provisions for AI-generated evidence, cross-border data access, and enhanced privacy protocols.
While terrorists continue to find creative applications for AI, state systems have increasingly adopted artificial intelligence as a vital asset for early threat detection, operational efficiency, and proactive engagement. Furthermore, AI has the potential to revolutionise how India approaches counter-terrorism. By shifting the focus from reactive to predictive security strategies, AI offers the opportunity to pre-empt threats, monitor evolving risks, and protect critical assets. However, this potential can only be realised through a holistic approach that includes institutional reform, strategic investments, ethical oversight, and international partnerships. For a nation such as India, grappling with diverse and dynamic security challenges, AI is not merely a tool of the future—it is a strategic imperative of the present.
Soumya Awasthi is a Fellow with the Centre for Security, Strategy, and Technology at the Observer Research Foundation.
The views expressed above belong to the author(s).