Author: Jaibal Naduvath

Expert Speak Digital Frontiers
Published on Mar 19, 2024

Flex, flux and the foreigner: AI and election interference

The democratic world must defend against the designs of adversarial regimes intent on manipulating electoral outcomes by exploiting generative AI's vast capabilities.

AI overhang in the 2024 elections

With nearly half of the world in election mode in 2024, including India, the European Union (EU) and the United States (US), the far-reaching impact of Artificial Intelligence (AI), particularly generative AI, on popular narratives and electoral outcomes has come into sharp focus. The potential of AI as both a decision-making and a democratising tool is immense, from enhancing voter engagement and participation to facilitating cost-effective campaign planning, precise scenario simulations, and big data analysis. Yet, in the wrong hands, its staggering capabilities could unleash mayhem: spreading falsehoods, sowing confusion, swinging votes and sealing electoral fates. The recent AI-generated voice impersonation of US President Biden in a phone message urging New Hampshire voters not to participate in the state's Democratic primary election exemplifies these perils. These risks weigh increasingly on the electorate: a poll by the Artificial Intelligence Policy Institute found that an overwhelming majority of American voters believe unregulated AI would likely lead to an accidental ‘catastrophic event’. The World Economic Forum’s Global Risks Perception Survey 2023-2024 validated those fears globally, with over half of the respondents ranking AI-generated disinformation as the risk ‘most likely to present a material crisis’ in 2024.

Unravelling the election interference chessboard

Amid the hustle and bustle at the hustings, one aspect will almost certainly make news this global election season: malign foreign interference. Russia was accused of interfering in the 2016 US elections, which many believe affected the outcome. Those were, in contrast to today, arguably simpler times, when conventional methods like email hacking and social media engineering were used at scale to effect specific outcomes. Advances in AI have since transformed the arena, significantly enhancing the capability of hostile agents to spread disinformation with little possibility of traceback. In democracies where elections tend to be particularly fractious, such malicious interventions could exacerbate divisions and tensions, exposing micro-vulnerabilities and social fissures for hostile actors to manipulate.


AI tools that can sift through billions of bytes of data in the blink of an eye, convert them into actionable intelligence, and engineer precise influence operations that exploit voter biases could be deployed at scale. Fabricated news, deepfakes, inorganic social media commentary, email swamping, and robocalls, among others, could be used to mobilise voters in particular ways. Customised content that reinforces biases could be used to target specific groups, leveraging the propensity of algorithms to present content tailored to individual preferences, further polarising the electorate. Disinformation swamping could also infect messaging platforms such as Telegram and WhatsApp, which have become an important source of public information, earning them the wry sobriquet of ‘universities’. Such flooding of manufactured narratives would shift focus away from the issues that matter to the agenda of hostile agents, potentially gaming election processes at an unprecedented scale.

The pernicious agency of digital puppeteers 

Countries like Russia, China, North Korea, and Iran, often suspected of such hostile agency, are believed to have honed their ability to infect the information environment to a degree hitherto thought impossible. The Belfer Center’s National Cyber Power Index 2022 ranks China and Russia as the second and third most comprehensive cyber powers respectively, with China leading the index in the use of cyber power for surveillance. North Korea and Iran feature at positions seven and 10 respectively. Aiding these states in their machinations overseas is the proficiency they have gained by surveilling, and weaponising the predispositions and biases of, their own citizens. China, for instance, operates an extensive multi-modal surveillance mechanism covering its almost 1.4 billion citizens, used to predict individual behaviour and pre-empt activities viewed as anti-state or anti-party. The rise of AI and the availability of advanced AI-based sorting tools, known in local parlance as ‘one person, one file’ software, have enabled party arms and state security apparatuses to distil colossal amounts of data into precise, actionable information down to the individual. Today, China arguably possesses the capability to amass and analyse extensive micro-data from overseas, utilising a diverse arsenal that spans large-scale data thefts, commercial brokers, dark web activities, community-embedded nodes, industrial espionage, illicit backdoors in exported communication equipment, and sympathetic insiders. The proliferation of off-the-shelf generative AI tools, and the limited capacity to check their misuse, force-multiplies these efforts. Significantly, China is argued to have prioritised science and technology education to build a mega talent pool for cyber operations.


The AI duels spurring a Bourse rally

Given the huge stakes, 2024 will likely witness an unprecedented surge in the development and deployment of AI-based disinformation countermeasures. At their heart are solutions for detecting AI-generated content, such as content IDs and concealed watermarks. A significant challenge, however, is that the evasive capabilities of generative AI tools outpace the forensic technologies developed to detect them, turning the contest into a perennial cat-and-mouse game. Evolving frameworks such as the adversarial-learning-based AI text detector RADAR and the real-time deepfake detector FakeCatcher seek to address this challenge, but their effectiveness remains to be proven in an ever-evolving tech landscape.
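
To make the watermarking idea concrete, the sketch below shows how a statistical detector might test text for a sampling-bias watermark of the ‘green-list’ variety described in the research literature (e.g., Kirchenbauer et al., 2023). The whitespace tokenisation, hash scheme, and green-list fraction are simplifying assumptions for illustration only; this is not how RADAR, FakeCatcher, or any specific vendor tool works.

```python
import hashlib
import math

# Minimal sketch of statistical watermark detection for AI-generated text.
# All parameters below are illustrative assumptions, not any vendor's scheme.

GREEN_FRACTION = 0.5  # assumed fraction of the vocabulary marked "green" at each step

def is_green(prev_token: str, token: str) -> bool:
    """Pseudo-randomly assign `token` to the green list, seeded by the preceding
    token -- mirroring how a watermarking model would bias its sampling."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION

def watermark_z_score(tokens: list[str]) -> float:
    """z-score of the observed green-token count against the null hypothesis
    that the text is unwatermarked (hits ~ Binomial(T, GREEN_FRACTION))."""
    trials = len(tokens) - 1
    if trials <= 0:
        return 0.0
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    expected = GREEN_FRACTION * trials
    stddev = math.sqrt(trials * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - expected) / stddev

text = "example passage to screen for a sampling-bias watermark".split()
print(f"z = {watermark_z_score(text):.2f}")  # large positive z => likely watermarked
```

The intuition: a watermarking model subtly prefers ‘green’ tokens during generation, so genuinely watermarked text shows a green-token count far above chance and a large z-score, while human-written text hovers near zero. It also shows why the cat-and-mouse game persists: paraphrasing the text scrambles the token pairs and erodes the signal.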

From MIT Lincoln Laboratory’s Reconnaissance of Influence Operations (RIO) system, which automatically detects ‘hostile influence narratives’ and their perpetrators across social media networks, to proprietary tools from for-profits such as Cyabra, Blackbird.AI, and Alethea that buffer against disinformation, the information security market has exploded into an arena of immense interest in 2024. While expensive proprietary tools are often out of the reach of ordinary citizens, the race to counter disinformation has produced a bevy of easy-to-access AI-based alternatives. Browser extensions from Media Bias/Fact Check, FakerFact, Logically, Hoaxly, and NewsGuard, among others, and services such as botbusters.ai enable consumers to detect disinformation with varying degrees of accuracy. Little surprise, then, that stock market watchers expect AI stocks to lead the market rally in 2024. However, whether this burgeoning surge of innovation, backed by a tsunami of market money, establishes the foundation of a new era of digital sovereignty remains to be seen.
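
Many of these consumer-facing checkers rest on supervised text classification. The following minimal sketch, a TF-IDF baseline with placeholder training examples (assumptions for illustration, not any vendor’s data or model), shows the shape of the approach; production tools layer on network signals, provenance checks, and far larger models.

```python
# Illustrative sketch of the kind of text classifier behind many consumer
# disinformation checkers. The tiny labeled corpus is placeholder data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Officials confirm routine maintenance closed the polling station.",
    "SHOCKING: secret memo PROVES the election is already decided!",
    "Turnout figures were released by the election commission today.",
    "They are hiding the truth! Share before this gets deleted!",
]
train_labels = [0, 1, 0, 1]  # 0 = ordinary news register, 1 = disinformation-style

# TF-IDF features plus logistic regression: a deliberately simple baseline.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

claim = "Share this NOW: the results are secretly being rewritten!"
print(model.predict_proba([claim])[0][1])  # probability of the 'disinfo' label
```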

Balancing regulation and innovation

Deceptive propaganda threatens sovereignty and social order. In a scenario where regulating frontier technologies is difficult, and tech companies are loath to self-regulate, their recent pledge notwithstanding, states face an uphill task. The rise of technologies such as deepfakes, which blur the line between fact and falsehood, necessitates robust guardrails against their misuse by inimical entities. Governments are devising ways to combat the scourge through engagement with developers, legislation, and specialised agencies. India’s proposed Digital India Act envisions a comprehensive legislative framework against disinformation and for the ethical use of online technologies. Singapore has enacted the Protection from Online Falsehoods and Manipulation Act and the Foreign Interference (Countermeasures) Act to give legislative teeth to its countermeasures. France and Sweden have set up dedicated agencies to fight fake news and external interference.


However, amid the debate surrounding the potential misuse of AI, policymakers must adopt a nuanced, pro-innovation approach to a technology whose potential is enormous. Desperation should lead neither to policy overreach nor to questionable approaches, as exemplified by the decision of several US states to restrict the use of AI for campaign purposes; such efforts lack detection and enforcement teeth and could stifle innovation. The key lies in crafting policies that focus on unravelling intent rather than clamping down on algorithms, and on elevating citizen awareness to build resilience against disinformation. Singapore, for instance, has rolled out a national campaign through its National Library Board to build media literacy against disinformation. The global regulatory landscape is, however, still evolving. From voluntary good-faith agreements with AI companies on baseline standards, to letting sectoral regulators govern the pieces of AI within their domains, to frameworks placing substantial obligations on AI companies, different countries are walking different paths. The European Union’s AI Act, the first comprehensive AI legislation globally, offers a template based on risk classification. Among these, collaborative regulatory approaches have been shown to spur the most innovation. Importantly, governments need a mindset change: acting as equal partners to innovators and industry in disinformation mitigation, enabling and offering strategic direction rather than merely enforcing and punishing.

Beyond dystopia   

Two millennia ago, Octavian used disinformation to overcome Mark Antony and dismantle the Second Triumvirate, paving the way to his crowning as Rome’s first emperor. Transnational police states are the new Octavians. They seek to manipulate the information environment to create a world order of their liking, in which they are the suzerains and their way is the common law, all while maintaining a façade of choice, much like the original. In an election year, they could intensify their efforts, leveraging the potency of generative AI to perilous effect. The democratic world needs to safeguard against the dark designs of these modern-day Octavians, who seek to game legitimate democratic processes to their narrow ends. Yet the debate surrounding their potential misuse of generative AI must not distract from its many benefits or lead to counterproductive policy interventions, for AI’s capability to strengthen participative processes and make them robust and resilient is unparalleled. Hence, as we walk the tightrope between generative AI’s marvels and malfeasance this global election season, we must ensure that we are vigilant, not vagrant. And that is a very thin line.


Jaibal Naduvath is Vice President and Senior Fellow at the Observer Research Foundation (ORF).

The views expressed above belong to the author(s).
