The future of counterterrorism: Evolving online tools and tactics

Author: Erin Saltman
Published on Feb 21, 2024

This article is part of the Raisina Edit 2024 series.

As terrorists and violent extremists adapt their online tactics in the era of AI, counterterrorism efforts should also evolve, employing new tools and approaches.


Counterterrorism and counter-extremism efforts, both offline and online, must evolve to stay ahead of threats in the age of artificial intelligence (AI). However, as we advance tools and approaches, it is important to maintain and strengthen tried and tested frameworks and partnerships.

The threat landscape online

The Global Internet Forum to Counter Terrorism (GIFCT) convenes its multistakeholder community, through its programmes, Working Groups, and research arm—the Global Network on Extremism and Technology (GNET)—to identify changes in patterns of terrorist and violent extremist content (TVEC) online and scan for future threats.

AI will continue to be at the forefront of discussions in 2024. Big tech companies such as Microsoft, Meta, and YouTube have outlined how they intend to build AI responsibly as part of their wider safety efforts. However, the open sourcing and wider accessibility of AI tools have prompted a new wave of anxiety about the tactics bad actors will adopt. For example, GIFCT mapped how generative AI models can be exploited, highlighting risks associated with synthetic audio, video, and image content.

Adapting to the evolution of social media use will be a continuous struggle for law enforcement and individual platforms’ security teams, particularly in responding to threat signals across platforms. The increase in nationally-focused regulatory frameworks in a system where terrorist networks are both cross-platform and transnational also makes unified approaches to content regulation difficult.

Online and offline counterterrorism efforts are even harder to disentangle given the potential increased exploitation of 3D printing technology and unmanned aerial vehicles (UAVs). Since 2019, there have been at least nine documented cases of terrorists or violent extremists, largely from white supremacist networks, attempting to develop guns using 3D printing. Drones and other UAVs have also seen increased usage by Al-Qaeda, the Islamic State, and Al-Shabaab in Africa. Approaches to policing the sharing of 3D-printed weapon instructions or the sale of drones for exploitative purposes remain underdeveloped, though recent initiatives are a start: the UN Delhi Declaration of October 2022 on countering the use of new and emerging technologies for terrorist purposes, and the UN Abu Dhabi Guiding Principles of December 2023 on the threats posed by the use of unmanned aircraft systems for terrorist purposes.

Tools of today

There are three layers to effective counterterrorism and counter-extremism efforts online: in-platform safety efforts; platform partnerships with third parties; and cross-platform or internet-wide solutions.

Individual tech companies’ moderation and legal compliance efforts to identify and remove violating content are reflected in their public policies, user safety centres, and transparency reports. Larger companies have implemented in-platform tools for countering terrorism, such as image and video matching, detecting recidivism, using AI for language understanding, and employing Strategic Network Disruptions (SND). However, allocating resources to develop safety tools while ensuring adequate human resources to manage them, including the need for geographic coverage and subject matter expertise, can be a challenge, especially when companies are managing multiple security risks.

Partnerships between a platform and third parties to enhance counterterrorism and counter-extremism efforts include “trusted flagger programmes” that assist in flagging URLs or other violative content; services from vendors such as SITE, Flashpoint, Jihadoscope, and Memri; and government-funded public-private partnerships such as the Terrorist Content Analytics Platform (TCAP). Platform partnerships have also advanced methods for positive interventions. In these cases, the scale is limited to a single platform, which undertakes sensitive partnerships with NGOs and relies on nuanced content development and strategic communications. To be more effective, wider positive intervention strategies are needed that include diverse social platforms, gaming platforms, and online marketplaces.

Scaled and future-proofed solutions for preventing and responding to TVEC must recognise its cross-platform and transnational nature. GIFCT has developed cross-platform solutions that are feasible and scalable as threats evolve, and accessible to companies of all sizes. Since 2018, GIFCT has developed hash-sharing technology with its member companies to share signals relating to TVEC.

The GIFCT Hash Sharing Database (HSDB) has evolved its taxonomy and technical capacities twice since its launch to ensure it reflects the threat confronting platforms and remains integrated with the approaches platforms develop internally. To do this, GIFCT member companies must agree on definitions and frameworks for TVEC inclusion, already an achievement in a space lacking international consensus. The HSDB was founded on an agreement to share hashed content related to entities on the United Nations Security Council Consolidated List established by Resolution 1267. However, many lone attackers and national violent extremist groups never make it onto this list.

In the aftermath of the Christchurch terrorist attack in New Zealand in 2019, GIFCT developed an Incident Response Framework and expanded the HSDB to include perpetrator content associated with its Content Incident Protocol. In 2021, responding to international concerns about biases in government designation lists and seeing increases in lone-actor white supremacist attacks, GIFCT again expanded its taxonomy to include hashes of attacker manifestos and branded TVEC. Expansions also required a technical update to include further types of “content”: the HSDB can now share hashes not just of images and videos, but also of PDFs, URLs, and audio files.
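To make the sharing model concrete, below is a minimal sketch in Python. It is illustrative only: the database, labels, and function names are hypothetical, and it uses a cryptographic SHA-256 digest for brevity, whereas production systems favour perceptual hashes that still match content after minor edits such as recompression or resizing.

```python
import hashlib
from pathlib import Path

# Illustrative hash-sharing sketch (not GIFCT's implementation).
# Member platforms contribute fingerprints of known TVEC; other platforms
# check uploads against the shared list. Only hashes cross platform
# boundaries; the underlying content is never exchanged.

def hash_file(path: Path) -> str:
    """Return a SHA-256 hex digest of the file's bytes."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical shared database: digest -> taxonomy label agreed by members.
shared_hashes: dict[str, str] = {}

def contribute(path: Path, label: str) -> None:
    """A member platform adds a hash (never the content) to the database."""
    shared_hashes[hash_file(path)] = label

def check_upload(path: Path) -> str | None:
    """Another platform checks an upload; a hit returns the agreed label."""
    return shared_hashes.get(hash_file(path))
```

Because only fingerprints are shared, extending the database from images and videos to PDFs, URLs, and audio changes what gets hashed, not the sharing model itself.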

Solutions for the future 

Beyond content itself, signals of how bad actors operate online include everything from the shape of a user’s social network to financial transactions and coded language used to hide violent intentions. Expanding cross-platform threat detection beyond content-centred signals will be critical, but it will need a high degree of multistakeholder engagement to ensure counterterrorism efforts are proportionate and do not infringe on human rights.

Future-proofing counterterrorism and counter-extremism efforts online relies on a combination of embracing AI safety tools, expanding what signals can be shared across platforms, and using multi-layered threat detection models. AI and machine learning are already in use for counterterrorism and should be expanded to match the scale and speed of online TVEC dissemination. Synthetic and AI-generated content produced by terrorists and violent extremists is already within the inclusion parameters of the HSDB, but GIFCT plans to review the inclusion criteria to ensure they remain fit for purpose and to ask whether new forms of content should be added. Hashing is the most effective and tested method for sharing content signals between companies. Whether content is AI- or user-generated, hashing will continue to be an important cross-platform tool for companies to share signals that facilitate the proactive surfacing and removal of violating content.

Security efforts are also additive: the arrival of a new tool or approach rarely makes previous tools and partnerships obsolete. As GIFCT’s Director of Technology, Tom Thorley, explains, “Just because you invent an airbag for cars, doesn’t mean you get rid of seatbelts.” More complex counterterrorism approaches can layer algorithmic processes. GIFCT technical trials showed that combining tools and using layered signal methodologies decreased false-positive rates when surfacing TVEC. Online safety methodologies work best as hybrid models, where human oversight works with algorithmic advances to build, refine, and innovate systems for countering terrorism and violent extremism online.
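As a rough illustration of what layering signals can look like, the sketch below uses hypothetical signal names and thresholds (it is not GIFCT’s trial methodology): content is escalated to human review only when independent signals corroborate one another, which is why layered pipelines produce fewer false positives than acting on any single signal.

```python
from dataclasses import dataclass

# Hypothetical layered-detection sketch. Each signal alone is noisy;
# requiring corroboration between independent layers lowers the
# false-positive rate, and a human reviewer still makes the final call.

@dataclass
class Signals:
    hash_match: bool         # content matched a shared hash database entry
    classifier_score: float  # 0.0-1.0 score from an ML content classifier
    network_flag: bool       # account tied to a known violating network

def surface_for_review(s: Signals) -> bool:
    """Escalate to human review only on corroborated or very strong signals."""
    corroborations = sum(
        [s.hash_match, s.classifier_score >= 0.8, s.network_flag]
    )
    # One exceptionally strong signal still escalates; otherwise two
    # independent layers must agree before anything reaches a reviewer.
    return s.classifier_score >= 0.98 or corroborations >= 2
```

The thresholds are placeholders; the structural point is the ensemble: escalation requires either an exceptionally strong single signal or agreement between independent layers, the “seatbelt plus airbag” logic in code.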

Multistakeholderism and voluntary frameworks

No single state or sector can address the widespread challenges posed by terrorist and violent extremist content online. In evolving security approaches, multistakeholderism will be necessary to ensure that counterterrorism efforts are definable, defendable, scalable, proportionate, and in keeping with human rights considerations and international legal obligations. GIFCT’s Human Rights Impact Assessment in 2021 was carried out to identify and strengthen human rights within counterterrorism work, in the understanding that the protection and promotion of human rights, meaning the rights of the victims of terrorism and violent extremism and of impacted communities, is central to effective and sustainable counterterrorism efforts. Part of ensuring that cross-platform counterterrorism work aligns with the protection of human rights is bringing a wider diversity of platforms together, showcasing the heterogeneity of the internet, and providing space for governments, the private sector, and civil society to share knowledge, as the Raisina Dialogue in Delhi does each year. Tech companies are looking for guidance from governments and experts on topics such as borderline content and what is meant by meaningful transparency. The continued interaction between these communities is critical to ensuring that counterterrorism efforts reflect the needs and risks confronting them.

As terrorists and violent extremists evolve their online tactics, so too must practitioners, platforms, and governments. This is best done by working together, sharing knowledge, and finding common ground to advance efforts. 


Erin Saltman is the Membership and Programs Director at the Global Internet Forum to Counter Terrorism (GIFCT).

The views expressed above belong to the author(s).