Introduction
On 15 March 2019, two consecutive attacks during Friday prayers at two mosques in Christchurch, New Zealand, killed 51 worshippers and injured 48 others. Brenton Tarrant, the 28-year-old gunman, broadcast the attack live on Facebook as he opened fire.[1]
Owing to ambiguities in the platform’s policies on the broadcast of ‘live’ videos, the Christchurch footage remained on Facebook for an hour before being taken down.
The incident quickly served as a catalyst for the international community—governments, social media and technology companies, and members of academia, the media and civil society—to come together and create strategies to intercept and avert terror plots sown on digital media, while maintaining the individual rights of users. Two months after the incident, at a summit co-chaired by New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron, the Christchurch Call to Eliminate Terrorist and Violent Extremist Content Online[2] was formalised. While non-binding, the Christchurch Call outlines key commitments to be adhered to by signatory governments and online service providers, and presents a common foundation for collaborative work.
Rapidly advancing technology in the internet age has eased communication, provided an avenue for free expression, and facilitated the dissemination of ideas among people, governments, and international and multinational organisations.[3] At the same time, however, the internet has also been used by individuals and groups seeking to promote, propagate and incite terror and violence. Indeed, it has been argued that the widespread use of the internet has deepened the linkages between propaganda and violent extremism.[4]
To be sure, there have been earlier inter-governmental collaborations targeting terrorist narratives online. These include the UN Security Council Resolution 2354;[5] the creation of the International Framework to Counter Terrorist Narratives;[6] the setting up of the European Union’s Internet Forum;[7] and the formulation of the Database on Cyberterrorism by the Council of Europe.[8],[9] Technology companies, for their part, have attempted to prevent the misuse of the internet and their own platforms, through the Global Internet Forum to Counter Terrorism (GIFCT).[a] Other initiatives like Tech Against Terrorism, the Aqaba Process, and the Global Counterterrorism Forum also aim to bring together the different platforms to counter violent extremism online.[10]
What sets the Christchurch Call apart is its aim to bring together governments and their affiliated agencies, tech platforms, internet companies, and civil society organisations within a clearly defined set of goals to prevent the use of the internet for terrorism. It positions itself as a common platform for all stakeholders in efforts to address online violent extremism and its offline impact.
Past efforts have tended to blame governments for inconsistent and restrictive strategies that centre on censoring tech-based apps or some of their features. Tech platforms, meanwhile, have often been accused of being averse to regulation,[11] uncooperative and opaque towards governments, and of fomenting a trust deficit amongst constituent groups. Users are then left to navigate the consequences: possibly subjective content moderation in mild cases, and internet shutdowns in extreme ones. The Christchurch Call to Action aims to place states and tech companies on an equal footing.
The Call also emphasises ensuring the independence, freedom, and security of the internet and its digital citizens. As the internet increasingly dominates the everyday lives of large groups of people, the Call attempts to shield it from misuse by perpetrators of violence and terrorism rather than compromising on its open nature.[12] The Call declares that it will respect and promote freedom of expression and does not seek to stifle free speech while attempting to counter certain narratives. It highlights the impact of violent content available online on the “human rights of the victims, on our collective security and on people all over the world.”[13],[14]
Learnings from the Call
The Christchurch Call seeks equitable burden-sharing and the discharge of responsibilities amongst all stakeholders. Though utilitarian in appearance, the parties involved in drafting the Call were careful to incorporate a multi-faceted approach to keep the internet safe from misuse at the hands of terrorists and violent extremists. The Call strives to create a better, more holistic understanding of Countering Violent Extremism (CVE), including a model that promotes both awareness and counteraction.
The Call progressively seeks to develop Artificial Intelligence (AI)-led tools and techniques to prevent violent content from being uploaded to social media platforms. Similarly, there is ongoing effort to increase social media accountability and transparency surrounding the detection and removal of content. However, tech platforms must remain vigilant to ensure that this does not lead to the arbitrary removal of content. After all, the power of counter-narratives lies in analysing and understanding the nuances that drive the dissemination of such online content. Over-policing and other blunt forms of censorship would only further widen the trust deficit between platforms and their users. For example, algorithms that guard against extremist content largely focus on ISIS and Al-Qaeda, and consequently on Muslims and Middle Eastern communities. This has resulted in the removal of legitimate Arabic content, such as documentation of human rights violations in Syria.[15]
To evaluate the success of the Call, this brief outlines the lessons learnt in the one year since it was announced. It is necessary to map the progress made by the Call so far, keeping in mind its four-phase strategy to counter extremist content:
- Restructuring and updating the Global Internet Forum to Counter Terrorism (GIFCT);
- Curating and creating a crisis response protocol;
- Understanding, mapping and analysing research (conducted or identifying the gaps) on violent extremism online;
- Identifying the role of algorithms in radicalisation online.[16]
The initial phase of the Call centred on achieving the first two goals within a year of its implementation, and gradually proceeding with the remaining goals. The Call has met with success in its goal of removing barriers between companies, governments, civil society, and the tech community. In an unprecedented manner, it has established effective channels that allow governments and tech companies to coordinate and respond more quickly.[17]
One of the tasks accomplished by the Call was the reorganisation of the GIFCT from an industry-led cooperation into an ‘independent body’. The GIFCT was formed in 2017 by Facebook, Microsoft, Twitter and YouTube to allow for ‘knowledge-sharing, technical collaboration and shared research.’[18] These companies were also signatories to the Christchurch Call. In September 2019, four months after the Call was adopted, the GIFCT was made an independent[19] organisation, capable of collaborating with civil society, governments and multiple stakeholders rather than remaining a purely tech-based body. Its independent stature allowed for a broadening of its mission statement and focus, a reorganisation of its goals and structures, and the creation of an advisory board.[20]
The consortium had previously worked with various governments and international organisations like the European Union. After the Christchurch attack, however, greater emphasis was placed on collaboration and quicker crisis management, with dedicated resources to mitigate situations as they arose. The independence of the GIFCT has also allowed for enhanced governance structures and membership patterns: new companies, irrespective of their size and capacity, can now join the consortium. The reorganisation has allowed smaller companies to take advantage of shared resources and access other members of the consortium, which in turn has enabled easier and more effective access to databases and algorithms to counter the spread of extremist and terrorist content. For instance, workshops are regularly conducted in cooperation with Tech Against Terrorism’s Knowledge Sharing Platform initiative, including knowledge-sharing on combating the spread of violent extremism on the internet with more than 70 smaller tech companies.[21] The sharing of algorithms, methods, and databases with smaller platforms will be critical to the Call’s success.
After the Christchurch shooting, for instance, even though Facebook managed to take the video down from its platform an hour after it was streamed, it remained available on other sites such as 8chan for several hours afterwards. Within 24 hours of the shooting, 1.5 million attempts were made to re-upload the video onto Facebook.[22],[23] An estimated 300,000 of these attempts succeeded, and the copies were later taken down by Facebook’s content team.[24]
In spite of the GIFCT’s new independent status, however, its transparency continues to be under scrutiny. Although the consortium released a transparency report in 2019 outlining its progress in conducting and funding research, building an in-depth taxonomy, and sharing content databases, questions have been raised about the scope of that transparency and whether the GIFCT should be independently accountable and subject to external evaluation. A mere assurance of the GIFCT’s progress in tackling digital terrorism and preventing the proliferation of violent content is not enough.
Similarly, there is uncertainty around the independent status of the GIFCT. After all, its governance still resides with an industry-led operating board that consults with an independent advisory committee and a multi-stakeholder forum. The consortium is considered an ‘internet forum’, yet its membership, top management, and governance patterns are dominated by large social media companies.[25] Moreover, it appears that the GIFCT will still be financed by social media companies, although an executive director will lead fundraising efforts for particular projects. Such equating of the internet with social media services raises concerns of bias towards larger media platforms, especially in judgments offered by the GIFCT that may be adopted by governments to regulate the internet. Lastly, the GIFCT’s announcement that it will work with an independent advisory committee that includes government representatives raises new concerns that nation-states could misuse their involvement for political purposes.[26]
The GIFCT shares its database of image hashes and extremist keywords used for filters through a hash-sharing consortium that includes smaller companies lacking the resources to build their own databases. At the same time, extensive use of a particular algorithm could result in the proliferation and multiplication of errors.[27] Although hashes and image databases are the obvious choice for moderating, banning or blocking particular content, their use can also further radicalise the already radicalised. To successfully counter extremist perceptions and de-radicalise people, emphasis should also be placed on countering the underlying narratives through videography, images and hashes.
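To illustrate the mechanics of hash-sharing, the sketch below shows a simplified perceptual-hash check of the kind a smaller platform could run against a shared database. It is only an illustrative average-hash implementation assuming the Pillow imaging library, with placeholder database values; it does not reproduce the GIFCT’s actual hashing formats or matching thresholds, which are considerably more robust.

```python
# Illustrative sketch of hash-based content matching against a shared database.
# This is a toy average-hash implementation, not the GIFCT's actual technology.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Compute a 64-bit perceptual hash: one bit per pixel above the mean brightness."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Hypothetical shared database of hashes flagged by member companies (placeholder values).
SHARED_HASH_DB = {0x81C3E7FF7E3C1800}

def matches_known_content(path: str, threshold: int = 8) -> bool:
    """Flag an upload if its hash is within `threshold` bits of any known hash."""
    h = average_hash(path)
    return any(hamming_distance(h, known) <= threshold for known in SHARED_HASH_DB)
```

Because the comparison is a small Hamming-distance check rather than a full re-analysis of the media, even resource-constrained platforms can screen uploads against hashes contributed by larger members; the trade-off, as noted above, is that any error encoded in the shared database propagates to every platform that relies on it.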
Significantly, a crisis response protocol called the Content Incident Protocol (CIP) has been developed by GIFCT member companies to enable collaboration, research, and response between GIFCT members in the event of future terrorist attacks and to counter the spread of extremist content online across platforms.[28] It is a cross-sector initiative to avert the risk of increasing the virulence of a particular extremist content after a terrorist attack. It ensures the imposition of limitations to stop the streaming of live videos with violent and terrorist content, as well as limitations ensuring that these videos cannot be reposted elsewhere on the internet.[29] In the case of the October 2019 synagogue shooting in Halle, Germany, which was live-streamed on Amazon’s Twitch service, the footage remained available for public consumption for 30 minutes until the CIP was activated.
Subsequently, the consortium issued a statement detailing actions taken to mitigate the proliferation of the extremist content on the web. These included informing all GIFCT members, the Government of Germany, and Europol that a CIP was underway, followed by uploading relevant hashes from the “attackers’ videos, its derivatives, and other related content into the shared GIFCT hash database with a dedicated label to enable quick identification and ingestion by GIFCT member companies.”[30] Constant communication with the founding members of the GIFCT was maintained to assess risks and threats, along with coordination under the Situational Awareness Interval, which enabled the sharing of sensitive details of the CIP with affected parties.[31]
In December 2019, the GIFCT conducted “test terror scenarios”[32] under a shared crisis response protocol in a series of closed-door workshops in New Zealand. Major tech companies associated with the GIFCT claim that they are now better equipped to handle an attack similar in nature to Christchurch, have updated databases, and have enacted several measures to identify and remove extremist content from online platforms. However, this progress has been reported by the GIFCT members themselves, and not verified by an external monitoring and auditing agency.
There are other weaknesses in the CIP’s processes. While the primary site on which the extremist content was shared might remove it and disallow its reposting, this does not ensure its complete eradication from other websites and platforms to which it might have migrated. Though the videos of the Halle shooting were removed from Twitch, for instance, edited versions of the live stream eventually surfaced on Telegram.[33]
Since its inception, the Call has focused on sanitising the internet of violent and extremist content by taking down or removing problematic material. This approach, however, can mirror many governments’ approach to censorship and the suppression of legitimate critique or dissent online.[34] Moreover, it bypasses key questions, such as those related to the root causes of the proliferation of extremism online. Equal emphasis should be placed on reshaping the ecosystem that allows or encourages extremist narratives,[35] rather than only removing or blocking all extremist content.
Towards this end, the Call brings attention to the paucity of research and progress in mapping and understanding online extremism. As things stand, most current research focuses on devising means to counter online extremism. There is an imperative to establish and accept clear definitions of violent extremism to mitigate future occurrences.[36]
Algorithms play an essential role in how the internet works. An individual’s preferences and history of sites searched or visited allow platforms to curate lists of people, things, and ideas that the individual may be inclined to associate with. Because of the power of these algorithms, those who are inclined towards, or sympathetic to, acts of violence, terrorism, and extremism find common spaces and echo chambers on the internet where they can share perspectives, thoughts, and ideologies.
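The echo-chamber dynamic can be illustrated with a toy recommender that suggests accounts based on overlapping follows. This is a deliberately simplified sketch with hypothetical data, not any platform’s actual recommendation system; it only shows how preference-driven suggestions tend to reinforce an existing cluster of interests.

```python
# Toy "accounts to follow" recommender based on follow overlap.
# Hypothetical data; not any platform's actual algorithm.
from collections import Counter

# Hypothetical follow graph: user -> set of accounts they follow.
follows = {
    "user_a": {"acct_1", "acct_2", "acct_3"},
    "user_b": {"acct_2", "acct_3", "acct_4"},
    "user_c": {"acct_3", "acct_4", "acct_5"},
}

def recommend(user: str, graph: dict, k: int = 3) -> list:
    """Suggest accounts followed by users whose follow lists resemble this user's."""
    mine = graph[user]
    scores = Counter()
    for other, theirs in graph.items():
        if other == user:
            continue
        overlap = len(mine & theirs)   # similarity = number of shared follows
        for acct in theirs - mine:     # candidates the user does not yet follow
            scores[acct] += overlap
    return [acct for acct, _ in scores.most_common(k)]

print(recommend("user_a", follows))  # ['acct_4', 'acct_5']
```

Even in this minimal form, the suggestions for user_a are drawn entirely from users who already resemble user_a; applied to accounts associated with violent extremism, the same mechanism surfaces more of the same, which is the pattern the research cited below describes.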
Research on Radical Filter Bubbles conducted by the Royal United Services Institute, the International Centre for Counter-Terrorism, and Swansea University analysed certain social media platforms[b] based on how some aspects of their algorithms work.[37] A Twitter user, for instance, who had “followed” Jabhat Al-Nusra, an Al-Qaeda affiliate, would have been given suggestions for “people to follow” that included numerous other violent extremist accounts.[38] The same research found that if Twitter users from Southeast Asia “followed” the account of the Islamic State, then at least two sympathisers would appear on their list of “recommended friends”.[39] Along a similar vein, the study ‘Algorithmic Extremism: Examining YouTube’s Rabbit Hole of Radicalization’ has highlighted that not only do YouTube’s algorithms tend to promote violent content, the platform also lacks transparency about how those algorithms work.[40],[41],[42]
The online behavioural advertising model, which is based on promoting content based on users’ search history and prior preference, is believed to hinge on, and promote, “polarizing, violent or illegal content”[43] in order to amplify viewership.
More research is needed to understand the ways in which algorithms can work in support of extremism and terrorism, and how users seeking such content can be diverted away from it. With technology and the role of algorithms being a point of focus for the Call, it should emphasise the creation and publication of transparent content-moderation policies by internet-based platforms. The Call should receive support and assistance from non-government and non-industry stakeholders, such as NGOs, research organisations, and members of academia, in generating research on algorithms that allow counter-narratives to violent extremism to flourish. It should place emphasis on better equipping online platforms to manage violent content and on shedding the strategy of blanket shutdowns.
Identifying Problems and Gaps
The Christchurch Call seeks to counter violent extremism and terrorism online, but as observers have pointed out, the lack of a universally accepted definition of these terms allows for misinterpretation, miscommunication, and ambiguity, which could have adverse consequences for human rights and people’s freedoms across the world, including of speech and expression. The responsibility for defining these concepts should not rest squarely with states and governing authorities. Countries like Jordan and Spain, for example, both signatories to the Call, have been known to use anti-terrorism measures to curtail free speech.[44] The Call’s emphasis on the use of ‘upload filters’ to identify potentially extremist content is an important aspect of what it seeks to achieve, but various civil society organisations have pointed out that such filters run fundamentally against internet freedoms and the Call’s commitment to transparency and openness. This makes maintaining the delicate balance between security and online censorship crucial.[45],[46]
The GIFCT and the Call have both been criticised for a reductionist approach that equates the four founding companies—Microsoft, Facebook, Twitter and YouTube—with the entire “internet”.[47] Outsourcing speech-regulation responsibilities to private companies might make any of their suggestions and recommendations suspect.[48] Despite the pivotal role of private platforms in the Call, it refrains from commenting on, much less interrogating, the business models of these companies.
Though the Call aims to ensure effective enforcement of applicable laws, its voluntary nature means it cannot hold signatories liable should they fail to comply. Similarly, terms frequently used in the Call, like “industry standards or voluntary frameworks”, “appropriate actions” or “capacity building activities”, remain loosely defined, leaving room for interpretation by stakeholders whose motive is not to keep the digital space safe from extremist content, but to curb free speech.[49]
Furthermore, although the Call boasts the involvement of multiple stakeholders, the involvement of CSOs, NGOs, academics and journalists has been problematic. CSOs and other non-industry organisations were brought in at a late stage (only a day before the announcement of the Call) and were excluded from the drafting process. The Call can also be criticised for failing to include organisations from the Global South[50] or organisations whose mandate relates to issues of race relations.[51] Therefore, while claiming to be multi-stakeholder, it excluded a significant sector until the end.
Conclusion
The Call should use narrower definitions in its conceptual framework.[52] For instance, it “urges action by online service providers”, but does not specify who is being referred to. As written, the phrase could mean any of several related parties, such as social media platforms, internet infrastructure providers, or telecommunication carriers,[53] which may then be forced to fall within its ambit without context or relevance. As a corollary, while formulating policy proposals and regulation mechanisms, the Call should give government entities more involved roles while highlighting specific responsibilities and partners.
Meanwhile, the Call places heavy emphasis on ‘upload filters’ as a technical tool for content moderation. Although machines can help prevent the uploading or dissemination of violent and extremist content, some content will require human involvement and interpretation to augment algorithm-driven automated filtering. Moreover, such filters, which can remove and censor other (non-extremist) material, could also be used to curtail freedom of speech, expression, privacy and the right to dissent.
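One way to combine automated filtering with human judgment is a tiered, human-in-the-loop pipeline. The sketch below is a minimal illustration of that idea under assumed names: the classify function and the two thresholds are hypothetical placeholders, not a real moderation model or any signatory’s actual system.

```python
# Sketch of a human-in-the-loop upload filter: auto-block only high-confidence
# matches, route borderline cases to human reviewers, publish the rest.
# Thresholds and classifier are hypothetical placeholders.
from dataclasses import dataclass

BLOCK_THRESHOLD = 0.95   # auto-remove only when the model is very confident
REVIEW_THRESHOLD = 0.60  # below this, publish without review

@dataclass
class Decision:
    action: str   # "block", "human_review", or "publish"
    score: float

def classify(upload_bytes: bytes) -> float:
    """Placeholder for a trained classifier returning P(violent extremist content)."""
    return 0.0  # stub for illustration

def moderate(upload_bytes: bytes) -> Decision:
    score = classify(upload_bytes)
    if score >= BLOCK_THRESHOLD:
        return Decision("block", score)
    if score >= REVIEW_THRESHOLD:
        return Decision("human_review", score)  # human interpretation for borderline cases
    return Decision("publish", score)
```

The design choice here is that the automated filter never has the final word on ambiguous material: only clear matches are removed outright, while contested content reaches a human reviewer, which is one way to limit the over-removal and speech-related harms discussed above.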
It is also essential to ensure that legitimate discourse and due process involving states and governments are strengthened in the Call. At the moment, the Call places tremendous responsibility on social media platforms and internet companies. While there is no doubt that these entities have a fundamental role to play in stopping the spread of terrorist and extremist content on the internet, the responsibility of content moderation must not be outsourced entirely to private entities. After all, public institutions by nature hold greater accountability. Private companies can assist the governments in resolving complex, evidence-based issues governing the internet or content on their platforms, but cannot be the sole decision-makers.
Even as the intent behind the Call is exceptional, the measure of its success will be whether internet users become more aware of, and are dissuaded from spreading or consuming, violent content online.
About the Author
Priyal Pandey is a Junior Fellow at ORF.
Research Assistance for this paper was provided by Rutvi Zamre, Research Intern, ORF.
Endnotes
The views expressed above belong to the author(s).