Published on Aug 25, 2023

The adoption of generative AI in cognitive warfare poses a significant threat to national security

Deepfakes, disinformation, and deception

Earlier this year, in May, a fake image showing an explosion near the Pentagon, the headquarters of the US Department of Defense near Washington DC, was widely shared on social media, including by some verified accounts. The image, accompanied by a report, was even broadcast on several Indian mainstream TV news channels. It was later revealed that the image was probably created with generative artificial intelligence (AI), which can produce realistic content such as text, imagery, audio, and synthetic data. This was not the first time, however, that generative AI had made a splash.

A particularly concerning use of generative AI is the deepfake, which can create realistic-looking videos by replicating human facial features and expressions. In 2019, a video of Gabon’s President, Ali Bongo, suspected to be a deepfake, sparked concerns about his fitness to rule and helped convince elements of the African country’s military to attempt a coup. The coup failed, but the episode demonstrated the potential of AI-driven disinformation and its grave implications for national security and law enforcement.

The ongoing Russia-Ukraine conflict has become an active ground for deepfake experimentation and execution. In March 2022, a deepfake video of Ukrainian President Volodymyr Zelenskyy asking his troops to surrender went viral. The video was not particularly sophisticated, making it relatively easy to identify as fake. Likewise, a deepfake video of Russian President Vladimir Putin urging his troops to lay down their weapons and go home also went viral on Twitter (now ‘X’). In the absence of credible reporting from the ground, such deepfake videos caused chaos and confusion for citizens on both sides and spread uncertainty around military operations.

Technological boost for disinformation

Disinformation, the deliberate circulation of false information and rumours aimed at influencing public opinion or obscuring the truth, has emerged as a significant challenge, particularly for contemporary democratic societies. Besides its domestic use by political actors to advance their agendas, adversarial states have leveraged it as a major component of their ‘hybrid warfare’ or ‘grey-zone tactics’. The European Union describes this phenomenon as ‘foreign information manipulation and interference’ (FIMI), emphasising the manipulative behaviour involved rather than the truthfulness of the content delivered.

Recent technological advancements in AI, such as deepfakes and language model-based chatbots like ChatGPT, have helped amplify disinformation. These tools are already accessible to ordinary internet users beyond the tech community, enabling conspiracy theorists and purveyors of disinformation to produce false content and misleading narratives quickly and inexpensively. Europol estimates that as much as 90 percent of online content may be synthetically generated or manipulated by 2026.

The emergence of deepfake content has altered the dynamics of the information landscape by crowding it and subduing the flow of authentic information. This can lead to what experts term the “liar’s dividend”: growing awareness of deepfakes makes a sceptical population question the authenticity of even genuine videos, making it easier for culprits to dodge justice. The perpetrators of the 6 January 2021 Capitol Hill riots attempted to exploit this during legal proceedings by citing the video evidence as AI-generated, albeit to no avail. With a wider reach through social media, deepfakes can degrade situational awareness and imperil decision-making, particularly during crises. Unsurprisingly, some AI researchers have already called for a pause on developing advanced AI systems.

China’s lead on AI-enabled disinformation

Authoritarian regimes, in particular, have leveraged disinformation to target democracies. As expected, the People’s Republic of China (PRC) has taken the lead in using AI-based tools to expand its propaganda campaigns.

Over the last few years, China has invested heavily in AI, intending to become the world leader in the field by 2030 and to subsequently incorporate the technology into its military. In its quest to subjugate the enemy without the use of force, China has developed the concept of ‘intelligentised warfare’, which targets the adversary’s cognitive ability. Deepfake technology advances this goal when integrated into military doctrine. Tsai Ming-yen, the Director General of Taiwan’s National Security Bureau, has already flagged concerns about the Chinese Communist Party (CCP) potentially using deepfakes to sow seeds of chaos in Taiwan as part of its ‘cognitive warfare’.

A prominent example of China’s deepfake-enabled disinformation has been the use of AI-generated news anchors to promote CCP interests and anti-US propaganda. In two videos that appeared last year under the banner of ‘Wolf News’, these anchors discussed the US government’s allegedly flimsy response to domestic gun violence and the importance of positive outcomes from a US-China heads-of-state summit. The anchors were generated using an AI video solution provided by a British firm, Synthesia. While the videos did not gain much traction online, they underlined the CCP’s misuse of commercially available AI video-generation tools for disinformation purposes.

Relevance for India

India has not yet seen a deepfake video produced by China or CCP-linked elements. However, it may only be a matter of time.

Anti-India propaganda from China and the People’s Liberation Army (PLA) has surged since the Galwan Valley clash in June 2020. Twitter, in particular, has emerged as a preferred tool for China-linked elements pushing this propaganda, which focuses on peddling misleading reports about the Galwan clash, aggressively pressing China’s territorial claims, and disputing India’s military preparedness. In some cases, these efforts were boosted by Pakistani Twitter trolls, who amplified the propaganda within their own networks. Most recently, ahead of the third anniversary of the Galwan clash, Chinese handles posted graphic images and videos in an attempt to show the Indian Army in a poor light.

Moreover, a US-based data analytics firm, New Kite Data Labs, has recently claimed that Speech Ocean, a Beijing-based private AI firm with clients tied to the PLA, has been collecting voice samples from India, primarily from the sensitive border regions of Punjab and Jammu and Kashmir. The firm adds that locals have been hired to record pre-scripted words, phrases, or conversations, which are then transferred to China-based servers. While the exact purpose of this data harvesting remains unknown, it could point to the use of voice samples to train machine-learning models for deepfakes, which could then be deployed for propaganda. Indian policymakers must remain alert to this possibility, given China’s record of anti-India propaganda.

At a time when open physical warfare is widely shunned, grey-zone tactics like propaganda and disinformation are being employed to derive strategic advantage. The weaponisation of information and the spread of disinformation are critical to subjugating an adversary’s population without the use of force. Technological advancements like generative AI and deepfakes have only abetted these tactics, opening a Pandora’s box for the national security establishment, with consequences extending far beyond present-day imagination. India must take a proactive approach to tackling this menace.


Sameer Patil is a Senior Fellow at the Observer Research Foundation

Shourya Gouri is an intern with the Observer Research Foundation

The views expressed above belong to the author(s).
