The lines between offline and online conduct are becoming increasingly blurred, with online rhetoric affecting offline conduct and vice versa. The internet age allows ideas, content, and information to be disseminated with unprecedented speed. Since cyberspace has become an integral part of everyday life, it is important to study its interactions with, and effects on, the offline space.
Recent instances of terrorism have revealed the growing use of cyberspace in terror attacks. The West is thus increasingly aware of the need to update existing legislation on online conduct and safety, as well as overseas counterterrorism strategies and operations. Both online and offline strategies must cover a broader scope to make them more holistic and comprehensive. The new policy changes encompass not only Islamic extremism but also home-grown, violent right-wing extremism.
There are two aspects of terrorism, online narrative and on-ground violence, connected by a common thread: the rule of law. While the former does influence the growth of radicalisation, counterterrorism efforts primarily deal with on-ground battles between terror groups and state actors. As part of the War on Terror, non-partisan frontline response organisations, such as the International Committee of the Red Cross, aim to minimise civilian casualties in war zones, producing tangible results. Civilian casualties are caused not only by terrorist or extremist violence but also by governments' actions to counter them. In conflict zones, the rule of law is often blurred by all parties involved. It is thus important for governments to respect the rule of law while formulating counterterrorism policy, offline as well as online. When states commit abuses or human rights violations in the name of counterterrorism, whether online or offline, retaliatory violence spikes, creating a vicious cycle.
Governments, tech companies and civil society organisations (CSOs) acknowledge that online extremist propaganda is widespread. However, there is some debate on the efficacy of online counternarratives. Some actors believe that a lack of trust in the government makes people suspicious of such narratives, especially if they are backed by the government or a political party. Counternarratives work best when they resonate with their target audience and come from credible actors in local communities. Thus, the pairing of narrative creators (tech companies, CSOs or governments) with creative actors can go a long way in holding the audience's attention.
Many experts and activists believe that government involvement in the formulation of social-media or other cyberspace policies can muzzle freedom of speech and expression. However, some oppose this view. According to them, since the rights and freedoms enjoyed by people are eventually protected and upheld by the government machinery, the state’s involvement and feedback are crucial while formulating policies on online conduct. While efforts by tech companies, such as content takedown, do contribute, it is ultimately the state machinery that maintains and upholds the rule of law.
Social-media companies have users well beyond the countries where they are headquartered, which makes it imperative for them to engage with local CSOs, governments and the tech sectors of other countries to form region-specific online CVE (countering violent extremism) policies. The task of social-media companies is twofold: to remove extremist content when required, combining artificial intelligence (AI) and human expertise, and to empower local organisations to tackle these issues at the grassroots level.
Machine learning has come a long way in identifying online threats. Until a few years ago, consumers of technology were reporting the bulk of extremist content online. Now, it is predominantly machines that identify such threats. However, the human element is irreplaceable, since machines do not understand nuance or context. Social-media and tech companies must come together to share their experiences and learn from each other, to identify and take action against groups that post various extremist content on different online platforms. This will also help improve the AI so that it can recognise the commonalities in content.
With bigger platforms becoming better at identifying and removing extremist content, a new development has been the migration of such content to less-regulated platforms or the dark web. Big tech companies are now combining their resources and expertise to reach out to smaller platforms and train them in identifying radical content. To this end, the tech giants Facebook, Microsoft, Twitter and YouTube formed the Global Internet Forum to Counter Terrorism (GIFCT) in June 2017. This platform aims to work in collaboration with tech companies, CSOs and governments to disrupt the promotion of violent extremist propaganda online.
The role of governments is crucial in tackling insurgent ideologies, both online and offline. Non-state actors, too, play a significant part in shaping government policy and in fostering cooperation between state and non-state actors. Currently, there is only a macro-level understanding of the online extremist space, with a dearth of evidence-based research on the multidimensional processes involved in extremist propaganda and the tools used for online recruitment. However, researchers and CSOs are making efforts to study the micro-level causes of extremism, to help policymakers and strategists.
Global notions of liberal democracy are changing, and it is important to revisit them. Cyberspace is an integral part of society now, and the rights and freedoms linked to speech and expression must be updated keeping in mind both the global and the local context.
The views expressed above belong to the author(s).