Author : Danielle Cave

Issue Briefs | Published on Apr 12, 2021

Defending Democracies from Disinformation and Cyber-Enabled Foreign Interference in the COVID-19 Era

The COVID-19 pandemic has caused unique societal stress as governments worldwide and their citizens have struggled to work together to contain the virus and mitigate its economic impact. This has been a trying time for democracies, testing the capacity of democratic governance to mobilise state and citizenry to work together. It has also tested the integrity of open information environments and the ability of these environments to deal with the overlapping challenges of disinformation, misinformation, election interference and cyber-enabled foreign interference.

This essay will analyse some of these emerging challenges while looking at both state and non-state actors manipulating the global information environment, with a particular focus on the Chinese state’s rapidly expanding interference and disinformation efforts. It also includes a series of recommendations for policymakers around the world.

COVID-19 has spurred us into a new era of disinformation where we can see the daily erosion of credible information. We are increasingly fighting for the value of facts. But the global information environment was home to bad-faith actors long before the pandemic hit, from states interfering overseas or misleading their own populations with targeted disinformation to conspiracy groups like QAnon and alt-right extremist groups. Some of these groups have leveraged legitimate public concerns related to, for example, the pandemic, vaccine rollouts and issues like data privacy to build new conspiracy theories, and COVID-19 has provided them with a bigger platform to do so.[1]

Relationships between governments and social media platforms are increasingly strained. Divisions are deepening over how best to balance free expression against the public harms caused by mis- and disinformation and by speech that incites violence or hatred, and over how to tackle rapidly emerging issues such as the proliferation of manipulated content and increasingly sophisticated deepfake technologies that could mislead audiences and erode trust in institutions.[2]

Policymakers are also increasingly frustrated[3] at seeing authoritarian states, particularly China and Russia, leverage US social media networks and search engines to project propaganda and disinformation to an increasingly global audience.[4] This is particularly perplexing, as the Chinese state, for instance, bans these same platforms at home and both limits access to and censors foreign embassy accounts on Chinese social media platforms.[5]

As governments, platforms and civil society debate how to improve and manage the online ecosystem, democracies are left with a sub-optimal public square in which misinformation runs rife, conspiracy theorists can build global audiences for their content, trolls can harass and intimidate with few consequences, and authoritarian states exploit western platforms to reach global audiences.

The current online ecosystem allows a range of state and non-state actors who are already manipulating the information environment for strategic gain to shift from exploiting the pandemic to disrupting and undermining trust in COVID-19 vaccine rollouts. This shift, which has already begun,[6] will only further strain the already tense relationship between democratic governments and the major technology platforms. Anti-vaccination, conspiracy-infused misinformation disseminated across social media has mobilised public protest and opposition to vaccine programmes in several countries.[7],[8],[9] The January 6 riot at the US Capitol in Washington, DC demonstrated the potential for online mobilisation to move offline and manifest as violent civil unrest.[10]

This is not the public square we want or need, especially as the world seeks to return to some version of normalcy in 2021.

From Elections and State Actors to 24/7 Cyber-Enabled Interference

Governments have had more success at raising public awareness about online election interference than other forms of such intrusion. Since the Russian meddling in the 2016 US presidential election, democracies have been concerned that the legitimacy of their democratic mandate could be tarnished through election interference, although this preoccupation has distracted from authoritarian states’ broader efforts to shape the information environment in ways that secure strategic gains.

Research on cyber-enabled interference targeting electoral events has identified two predominant vectors.[11] First is cyber operations, such as denial of service attacks and phishing attacks, to disrupt voting infrastructure and/or to target electronic voting. Second are online information operations that attempt to exploit the digital presence of election campaigns, voters, politicians and journalists. Together, these two attack vectors have been used to seek to influence voters and their turnout at elections, manipulate the information environment and diminish public trust in democratic processes.[12]

Research has also identified 41 elections and seven referendums between January 2010 and October 2020 in which cyber-enabled election interference was reported.[13] There has been a significant uptick in this activity since 2017, with Russia the most prolific state actor engaging in online interference, followed by China (whose cyber-enabled election interference activity has increased significantly since 2019), Iran and North Korea.

Following several high-profile incidents of election interference, there is now a proliferation of multi-stakeholder forums[14] designed to coalesce public and policy attention around malign online activity leading up to and during elections. But an exclusive focus on the interference that surrounds elections is problematic. Information operations and disinformation campaigns—that exploit the openness of the information space in democratic societies—are happening every day, across all social media platforms.

In the Philippines, researchers and investigative journalists have repeatedly shown how the Duterte administration has continued to rely on the influence-for-hire market of bloggers, trolls, inauthentic accounts and online influencers to create and promote pro-government content and help distract citizens from pressing policy issues such as the government’s handling of COVID-19.[15] Social media platforms are showing a growing willingness to publicly attribute such activity. In September 2020, for example, Facebook removed a network of domestically focused fake accounts—57 Facebook accounts, 31 pages and 20 Instagram accounts—that it linked to the Philippines’ military and police.[16]

But it is not just states that spread disinformation. News outlets, fringe media and conspiracy sites—some with significant global reach—are also guilty of deliberately misleading their audiences. For example, in December 2019, Facebook took down more than 800 accounts, pages and groups linked to conservative Falun Gong-affiliated media company The Epoch Times for misrepresentation and coordinated inauthentic behaviour.[17] Operated out of the US and Vietnam, this activity used a combination of fake and authentic accounts to generate content about US politics. Approximately 55 million accounts followed one or more of the now-removed pages, and the vast majority of followers were outside the US.

Governments that only shift their attention to these issues in the lead-up to and during an election miss the bigger strategic picture as malign actors consistently target the fissures of societal cohesion in democratic societies. Some strategic actors have aspirations that are much more global than influencing an individual country’s election outcome. While governments have spent the last few years (re)building their counter-foreign interference capabilities, fighting cyber-enabled foreign interference outside of election time presents a different set of complicated challenges—from online attribution and enforcement, to protecting citizens from harassment and threats from foreign actors[18]—that they are struggling to handle. One issue is that, unlike traditional foreign interference, the responsibility for action is distributed across the platforms and different government agencies. In many countries, unless there is an election day to focus on, government leadership has largely fallen through the cracks between intelligence, policing and policy agencies.

Chinese State’s Flourishing Interference and Disinformation Efforts

The pandemic has threatened stability in authoritarian regimes, which have limited capacity to absorb social unrest peacefully, including in cyberspace. The emergence of the pandemic from Wuhan, China, created the risk of domestic political instability for the Chinese Communist Party (CCP). The party-state’s international standing was endangered by the spread of the virus and the resulting global economic disruption.

So how did the CCP respond to this challenge as COVID-19 spread? It threw itself into a battle of information and narratives, much of which played out online and continues to evolve today. At home, the state suppressed and censored information about the virus. Open-source researchers and citizen journalists in China who had been collecting and archiving online material at risk of removal by the Cyberspace Administration (China’s internet censor), including through the Microsoft-owned GitHub platform, were detained and their online projects shuttered.[19]

China’s censors also sent thousands of confidential directives to media outlets and propaganda workers, curated and amended trending topics pages, and activated legions of fake online commentators to flood social sites with distracting commentary.[20] One directive from the Cyberspace Administration said the agency should control messages within China and seek to “actively influence international opinion”[21].

The effort to influence international opinion, which remains ongoing, relied on a very different toolkit to the one wielded at home. US social media networks were central, providing the ideal platform for China’s ‘wolf warrior’ diplomats, state media, pro-CCP trolls and influencers, and inauthentic accounts to boost and project the CCP’s narratives, including disinformation about where the virus originated. They also provided the perfect space for this collective of online actors to undermine critical reporting from western media, research institutes and NGOs, and to smear and harass researchers and journalists whose work provided facts and analysis in stark contrast to the propaganda being projected globally by the CCP.[22]

The Chinese state’s large-scale pivot to conducting information and disinformation operations on US platforms occurred across 2019 and 2020.[23] As the pandemic spread, the Chinese state found itself ideally positioned to experiment with cross-platform and multi-language information activity that targeted overseas audiences and largely amplified pre-existing COVID-19-related and political narratives.[24] But the Chinese state’s current efforts lack the sophistication of those of other states that engage in this online behaviour, such as Russia. For example, there is little effort to cultivate and invest in rich, detailed online personas, and the campaigns lack the linguistic and cultural nuance needed to build credible fake influence networks. However, the Chinese state’s information efforts are persistent. While they may have focused on quantity over quality thus far, given the enormous resourcing they can bring to developing this new online capability, quick improvement can be expected. The Chinese state also brings an alignment in tactics and coordination—between diplomatic messaging, state media, pro-CCP trolls and influencers, the co-option of foreign fringe media and the deployment of inauthentic online activity—that no other state can match.

Playing Stronger Defence and Models for Collaboration

COVID-19, and the CCP’s efforts to control and shape international information flows about the pandemic through the online projection of propaganda and disinformation, have made clear just how easy it is for malign actors to manipulate open information environments.

Harder choices will have to be made about how to better protect our information ecosystems and how to deter and impose costs on the many malign actors seeking to exploit them. This will require governments to work more closely with the platforms and civil society groups to ‘defend forward’ to counter and disrupt malicious information activity. There is also a lucrative market of commercial online influence-for-hire service providers, to which state actors can outsource propaganda distribution and influence campaigns to obfuscate their activities. These commercial actors are increasingly part of the fabric of political campaigning in many countries. However, the lack of transparency around these activities risks corrupting the quality of democracy in the environments in which they operate.

Globalisation and the openness of democracies make these challenges acute, as that openness has left democratic states vulnerable to threats of interference and subversion. Much of the thinking around cyber-enabled foreign interference has been framed by Russian meddling in the 2016 US presidential election, yet other strategic actors are able to exploit disinformation campaigns and information operations in powerful combination with other levers of state power. China, for instance, has interwoven disinformation within its diplomatic and economic coercion of Australia in retaliation for the Australian government’s call for an independent international enquiry into the origins of the COVID-19 pandemic.[25]

Given the cross-cutting nature of this challenge, diplomacy and policy are fundamental to pulling together like-minded countries to engage with and contest cyber-enabled foreign interference and the actors—state and non-state—who spread disinformation for strategic gain. Social media platforms have been a front line in this battle, and it is often the platforms themselves that must detect and enforce against state-linked information operations and disinformation campaigns that exploit their users. Yet, the platforms are driven by different motivations from national governments. Multilateral and multi-stakeholder approaches must be encouraged to facilitate the defence of democracy as a system of governance and values. This is particularly important in the arc of states from Japan down through Southeast Asia to India, which contains fast-growing economies yet many fragile democracies, and where the Chinese state’s power projection has the potential to influence a long-term drift away from democratic governance.

There are models for collaboration between states in pushing back against interference that attempts to transform democratic openness into vulnerability. The European Centre of Excellence for Countering Hybrid Threats[26] draws together expertise from across the European Union and the North Atlantic Treaty Organisation (NATO) to facilitate strategic dialogue on responding to hybrid threats, develop best practice, and build capacity through training, professional development and joint exercises. NATO Stratcom[27] is another centre of excellence, combining strategic and tactical expertise from across the alliance in collective defence against disinformation and information operations.

These models could be replicated through the mechanism of the Quadrilateral Security Dialogue (Quad) between Australia, India, the US and Japan. The contemporary significance of the Quad for both Australia and India is that it provides members with the collective capacity to balance China’s influence in the Indo-Pacific. This alignment of interests could provide an important vehicle for building structures like those that have been trialled elsewhere and offer resilience against cyber-enabled foreign interference. This should include multi-stakeholder engagement that mitigates the splintering of economic and national security interests; capacity building around detection, strategic communication and digital diplomacy; thresholds for collective cost raising; and scenario-planning to stress-test the capacity of these structures to withstand the threat.

Recommendations

  • Invest in civil society: Civil society is fundamental to deterrence, and capacity building is imperative in developing whole-of-society structures that can generate societal resilience to foreign interference. Media literacy, independent research and investigative journalism assist an informed public in resisting the sway of disinformation. Independent civil society bodies that monitor elections and other electoral events should develop capabilities that provide transparency in the information environment. The digital public sphere shapes political discourse in modern democracies and deserves the protection that governments alone cannot provide.
  • Strength in democratic collectives: Governments themselves can take steps to mitigate the risks of cyber-enabled foreign interference. Collectively, democracies can band together to attribute, cost raise and deter interference by other states. This collective strength will deter by imposing costs and shaping behaviour. States targeted individually may be reluctant to escalate grey-zone aggression. However, where there is a collective response, adversaries are likely to recalibrate their behaviour in the face of collective actions such as diplomatic activities or economic sanctions.
  • Platforms must deter and punish: Social media networks need to work harder to deter and punish actors that actively spread disinformation on their platforms. The platforms should introduce additional categories into their reporting functions that allow users to report mis- or disinformation. They should also suspend accounts and networks that are repeat offenders.
  • Protect online news: Search engines must focus more effort on stripping their mainstream products of propaganda and disinformation. Google News, for example, features Chinese and Russian state media, as well as Chinese state press conferences. But these are not credible news sources, and their reporting regularly contains disinformation. Online news portals should conduct an urgent audit of what they categorise and promote globally as ‘news’.
  • The Quad: Multilateral structures like the Quad can act as powerful levers of deterrence, with the potential to shape behaviour in the Indo-Pacific. The Quad could provide a structure beneath which frameworks for threat detection and intelligence sharing could be developed. The shared interests of this strategic grouping could provide a vehicle for the development of agreed norms, on the boundary between influence (which all governments undertake) and interference, thresholds for a collective response to cyber-enabled interference and deterrence measures, and capacity building on detection and disruption. The Quad could establish a Centre of Excellence in countering hybrid threats and political warfare, drawing on the substantial expertise available across the US, Australia, India and Japan in developing collective solutions, structures and responses to threats like cyber-enabled foreign interference.
  • Prioritise multi-stakeholder engagement: Multi-stakeholder engagement between governments, civil society and industry offers a similar framework for driving cooperation to mitigate risk. As the platforms act as the enforcer where social media is the vector of interference, transparency around enforcement actions is essential to public trust. Currently, the platforms provide a range of transparency reporting practices. Increasingly, governments are pushing for mechanisms to ensure comparative reporting practices and metrics can be used to evaluate the performance of the platforms in this area. Voluntary codes of practice on disinformation have been implemented in the EU and Australia, although their usefulness remains to be seen. Governments will apply some scrutiny to how well these voluntary mechanisms perform before imposing regulation. There are significant challenges in the development of regulatory engines of transparency. Transparency reporting that captures metrics on the volume and reach of state-linked inauthentic networks is important, but these networks are not the only vectors for foreign interference via social media. Algorithms drive how content is surfaced to platform users, and they can be designed to suppress certain types of content, users or issues, and to amplify others in ways that are not conducive to democratic principles, freedom of expression or free and fair elections. Algorithmic transparency is, therefore, another important principle in scrutinising how platform manipulation serves as cyber-enabled foreign interference.

[1] Elise Thomas and Albert Zhang, ID2020, Bill Gates and the Mark of the Beast: how Covid-19 catalyses existing online conspiracy movements (Australia: ASPI International Cyber Policy Centre, 2020).

[2] Hannah Smith and Katherine Mansted, “Weaponised deep fakes – National security and democracy,” Australian Strategic Policy Institute, 29 April 2020.

[3] Chris Mills Rodrigo, “Twitter comes under fire over Chinese disinformation on coronavirus,” The Hill, 25 March 2020.

[4] “Covid-19 Disinformation & Social Media Manipulation,” Australian Strategic Policy Institute, 27 October 2020; Jake Wallis, “Twitter data shows China using fake accounts to spread propaganda,” The Strategist, ASPI, 12 June 2020; Danielle Cave, Twitter, 26 February 2020.

[5] Daria Impiombato, “‘Page not found’: what happens when diplomatic statements meet the WeChat censor,” ASPI, 23 September 2020; Fergus Ryan, Audrey Fritz and Daria Impiombato, “TikTok and WeChat: Curating and controlling global information flows,” ASPI, 8 September 2020.

[6] Ariel Bogle and Albert Zhang, “Chinese and Russian influence campaigns risk undermining Covid-19 vaccination programs,” The Strategist, ASPI, 22 January 2021; Elise Thomas, “Covid-19 disinformation campaigns shift focus to vaccines,” The Strategist, ASPI, 24 August 2020.

[7] Ariel Bogle, Michael Workman and Stephen Hutcheon, “How coronavirus ‘changes the game’ for the anti-vaccination movement,” ABC News, 31 May 2020.

[8] Anita Chabria, “Anti-vaccine and alt-right groups team up to stoke fears of COVID-19 vaccine,” Los Angeles Times, 18 December 2020.

[9] William Wallis, “How anti-vaxxers are threatening the UK’s Covid programme,” Financial Times, 30 November 2020.

[10] James Purtill, “Weeks of rhetoric online made the January 6 storming of the Capitol ‘entirely predictable’, experts say,” ABC News, 8 January 2021.

[11] Sarah O’Connor et al., “Cyber-enabled foreign interference in elections and referendums,” ASPI, 28 October 2020.

[12] O’Connor et al., “Cyber-enabled foreign interference in elections and referendums.”

[13] O’Connor et al., “Cyber-enabled foreign interference in elections and referendums.”

[14] David Salvo and Heidi Tworek, “Paris Call Community for Countering Election Interference: What Democracies Can Learn from the Government of Canada,” Alliance for Securing Democracy, 26 May 2020.

[15] Shashank Bengali and Evan Halper, “Troll armies, a growth industry in the Philippines, may soon be coming to an election near you,” Los Angeles Times, 19 November 2019; Camille Elemia and Gelo Gonzales, “Stars, influencers get paid to boost Duterte propaganda, fake news,” Rappler, 27 February 2021; Coda Story and Lynzy Billing, “Duterte’s troll armies drown out COVID-19 dissent in the Philippines,” Rappler, 22 July 2020.

[16] Nathaniel Gleicher, “Removing Coordinated Inauthentic Behavior,” Facebook, 22 September 2020.

[17] Gleicher, “Removing Coordinated Inauthentic Behavior.”

[18] “China’s grubby cyber hit on Aussie researcher,” Daily Telegraph, 16 January 2021.

[19] Danielle Cave, “Data driven: How Covid-19 and cyberspace are changing spycraft,” Australian Foreign Affairs.

[20] Raymond Zhong et al., “No ‘Negative’ News: How China Censored the Coronavirus,” New York Times, 19 December 2020.

[21] Zhong et al., “No ‘Negative’ News.”

[22] Jacob Wallis and Albert Zhang, “Trigger warning. The CCP’s coordinated information effort to discredit the BBC,” ASPI, 4 March 2021.

[23] Jacob Wallis et al., “Retweeting through the Great Firewall,” ASPI, 12 June 2020.

[24] “Covid-19 Disinformation & Social Media Manipulation”; Wallis et al., “Retweeting through the Great Firewall.”

[25] Ashley Townshend, “China’s Pandemic-Fuelled Standoff with Australia,” United States Studies Centre, 20 May 2020.

[26] The European Centre of Excellence for Countering Hybrid Threats, “Themes,” Hybrid CoE.

[27] NATO Strategic Communications Centre of Excellence, “Home,” NATO Strategic Communications Centre of Excellence.

The views expressed above belong to the author(s).