India, like most democracies, now faces an evolved disinformation threat—less about mass manipulation and more about AI-powered, hyper-personalised influence. Propaganda is no longer broadcast; it’s a whisper tailored to one person at a time.
With Generative Artificial Intelligence (Gen-AI) now capable of crafting linguistically, geographically, and psychologically customised content, it has become possible to tailor propaganda down to the individual, with every message fine-tuned to the recipient's identity, belief system, emotional state, and cultural context. Mass messaging will soon become obsolete; instead, influence will be delivered as a whisper, algorithmically crafted for its target.
Through machine learning, AI can precisely target individuals by exploiting the data-rich environment of social media and online presence. Unlike the one-size-fits-all propaganda of the past, hyperpersonalised influence uses granular data (every click, like, or location ping) as ammunition to generate algorithmically customised messages. Blurring the line between marketing personalisation and information warfare, AI will “increase the speed, scale, and personalisation” of disinformation campaigns beyond anything seen before. Furthermore, while numerous use cases exist, this phenomenon is particularly relevant for military and electoral applications.
New-age Large Language Models (LLMs) such as Claude and GPT-4, trained on vast troves of text to generate human-like language, enable automated, high-volume personalisation at negligible cost, allowing the mass production of highly persuasive, subtly varied propaganda.
A key demonstration of these abilities is the infamous Claude case study, in which 100 'human' personas were created to orchestrate sophisticated influence campaigns across social media platforms in Europe, the Middle East, and Africa. The operation designed bespoke narratives for several purposes, such as supporting certain Albanian politicians, criticising the European business landscape while promoting Dubai, and advocating development initiatives in Kenya. More alarmingly, the system optimised for relationship-building over virality and for covert integration over breakout moments, the better to gain trust and appear human.
AI can infer an individual's personality or political leanings by mining and analysing social media data, activity patterns, and even linguistic style. This allows profiling to move beyond demographic factors such as age and gender and to exploit how a person thinks and feels, as the 2018 Cambridge Analytica scandal demonstrated. Since then, major advancements in data processing capabilities and generative AI have made profiling and targeting far more precise and scalable. The ability to merge location data with personal profiles helps craft disinformation that references a recipient's hometown, local news, or nearby events, making the propaganda more relevant and credible. This fusion of personal and local tailoring strengthens an influence operation's efficacy in eroding trust or inciting action.
Another layer of this mass-manipulation machine is AI's ability to conduct comprehensive, near-real-time sentiment analysis of facial expressions, speech prosody, and spoken or written content, which can exploit emotional vulnerabilities and amplify existing sentiments. For example, if sentiment analysis reveals a subgroup anxious about a particular crisis, an influence campaign can target them with calming misinformation that redirects blame, or conversely, with content that heightens their fear to induce mistrust.
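To make the mechanism concrete, the snippet below is a minimal, illustrative sketch of automated sentiment scoring using the open-source Hugging Face transformers library with an off-the-shelf pretrained model; the example posts and the segmentation logic are hypothetical. The same building block is available to platform defenders, who can use it to spot coordinated attempts to amplify fear or anger.

```python
# Minimal sketch: scoring public posts for sentiment at scale.
# Assumes the open-source "transformers" library and a generic
# pretrained sentiment model; the posts below are hypothetical.
from transformers import pipeline

# Off-the-shelf sentiment classifier (model choice is illustrative).
classifier = pipeline("sentiment-analysis")

posts = [
    "Prices keep rising and nobody in charge seems to care.",
    "Grateful for the new community clinic that opened this week.",
    "I don't trust anything I read about the water crisis anymore.",
]

# Score each post; a campaign could use such scores to segment audiences
# by emotional state, while defenders can monitor the same signal for
# coordinated emotional manipulation.
for post, result in zip(posts, classifier(posts)):
    print(f"{result['label']:>8}  {result['score']:.2f}  {post}")
```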
Considering the technology’s potential, state and malicious non-state actors alike have shown interest in deploying it to fulfil their objectives.
A notable use case is AI's ability to craft precision 'one-to-one' influence attacks that can be used both defensively and offensively in military contexts. China's People's Liberation Army (PLA) terms this phenomenon 'precision cognitive attacks': targeting key decision-makers in adversarial countries such as Taiwan and creating 'propaganda amplifiers' based on their beliefs, cognitive orientation, emotional systems, and behavioural tendencies. These amplifiers would allow the PLA to curate and generate both truthful information and disinformation, potentially influencing those decision-makers to further Beijing's interests.
However, China is not the only country exploring the technology's potential. A rather rudimentary example comes from the Russia-Ukraine war, in which Russian propagandists created deepfake videos of Ukrainian President Volodymyr Zelenskyy instructing his soldiers to surrender. While crude deepfakes like these are seldom trusted, hyperpersonalisation will arm them to exploit targets' information cocoons, exacerbating their potency. Similarly, Ghost Machine, developed by the United States (US) Army's Special Operations Forces, has demonstrated that with merely 30 seconds of audio it is now possible to create personalised AI voice clones of enemy commanders or their family members. These clones can be used to urge enemy personnel to defect or surrender, making it possible to demoralise and deceive adversaries more convincingly and faster than ever before. Such personalised deception was largely theoretical in the past; AI is quickly making it technically feasible.
In the political realm, hyperpersonalisation will enable disinformation campaigns to swing electoral outcomes. Analysts warn that AI systems can generate tailored propaganda targeting key demographics or locales at critical moments, causing greater political polarisation and essentially removing the undecided voter from the equation, increasing confusion and fuelling cynicism.
Since the messaging can be massive in scale yet micro-targeted in content, it becomes harder for defenders to monitor or counter. Each voter may see a different manipulated, customised message: one sees an AI-fabricated news story playing to their economic anxieties, while their neighbour sees a deepfake video appealing to social grievances. These precision-targeted influence operations could “achieve political goals of corroding, infiltrating, subverting from inside [the target country]” via an “influence machine.” AI allows propagandists to conduct large-scale psychological warfare at the individual level, eroding societal cohesion. Personalised propaganda campaigns are particularly worrisome for democracies, since nefarious actors could use them to consistently undermine electoral processes and public discourse at a systemic level.
While hyperpersonalised influence campaigns have enormous disruptive potential, operational hurdles persist.
Tweaking a message according to a person’s profile may not guarantee a dramatically stronger influence outcome in every case. Human psychology is complex, and individuals may still reject or overlook even perfectly tailored falsehoods. This suggests that hyperpersonalisation is not a silver bullet for propaganda, at least not yet. Its effectiveness is contingent on context and the quality of execution.
Another significant limitation is its demand for vast amounts of personal data. Algorithms can only customise content if they have detailed information about each target. While such data is increasingly available through social media, data breaches, and data brokers, access to it remains uneven, and procurement often requires human intervention and back-channel networks.
Moreover, strong data protection measures can help mitigate the risks of such manipulation by starving propagandists of the raw personal data they need. Although 144 countries, home to 82 percent of the world's population, have some form of data protection framework, legal systems continue to lag woefully behind the breakneck pace of technological evolution.
While rules against the malicious use of publicly available generative AI exist, techniques such as prompt injection and jailbreaking make it possible to drastically alter model behaviour and skirt AI developers' internal policies.
Ultimately, hyperpersonalised influence campaigns are neither completely autonomous nor cost-free; they demand resources, sustained coordination, and iterative testing to be effective. State actors such as China are actively exploring the integration of hyperpersonalised influence campaigns as a serious strategy. However, most other state actors may find the costs and technical overheads prohibitive.
Hyperpersonalisation is not merely a refinement of existing operations—it represents a structural shift in how information is tailored, weaponised, and delivered. Its promise lies in precision, scale, and automation; its danger lies in subtlety, fragmentation, and erosion of shared reality. But its success is far from guaranteed. AI systems still require curated data, well-structured prompts, and oversight to maintain narrative coherence and avoid detection.
At present, hyperpersonalised influence offers clear advantages to sophisticated state actors and well-resourced firms, but remains constrained by technical bottlenecks, regulatory frictions, and human unpredictability. Nonetheless, the trajectory is clear: as generative models become more autonomous and surveillance datasets more granular, the line between persuasion and manipulation will blur further.
For liberal societies and open platforms, the challenge will be to detect, disrupt, and devalue these efforts before they scale. Countries must implement widespread digital provenance strategies and authenticity-by-design principles, employing structures such as the Starling Framework, to protect information integrity. Meanwhile, a media education campaign modelled on Singapore's approach is vital to strengthen citizen resilience and data literacy. There must also be collaborative regulatory harmonisation on AI governance among like-minded governments to enhance safety, quality, trust, and interoperability across geographies and sectors.
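As a concrete illustration of authenticity-by-design, the sketch below shows how a publisher might cryptographically sign content at the point of creation so that platforms and readers can later verify it has not been altered. It is a simplified, hypothetical example using the Python cryptography library with Ed25519 keys, not an implementation of the Starling Framework or any specific provenance standard.

```python
# Simplified sketch of authenticity-by-design: sign content at creation,
# verify provenance downstream. Uses the "cryptography" library; this is
# an illustration, not the Starling Framework or a formal provenance spec.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The publisher generates a long-term signing key (kept private) and
# distributes the matching public key, e.g., via its website or a registry.
publisher_key = Ed25519PrivateKey.generate()
public_key = publisher_key.public_key()

article = b"Official statement: polling stations open at 7 a.m. on Saturday."

# Sign a hash of the content at the moment of publication.
digest = hashlib.sha256(article).digest()
signature = publisher_key.sign(digest)

def is_authentic(content: bytes, sig: bytes) -> bool:
    """Check that content is unchanged and originates from the key holder."""
    try:
        public_key.verify(sig, hashlib.sha256(content).digest())
        return True
    except InvalidSignature:
        return False

print(is_authentic(article, signature))                 # True
print(is_authentic(article + b" (edited)", signature))  # False: tampered
```

In practice, provenance schemes bind such signatures to capture devices or editorial workflows and embed them in content metadata, which is what makes verification by platforms feasible at scale.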
In conclusion, hyperpersonalisation is likely the next frontier of influence operations; however, its ultimate potency will depend on the evolving interplay between the attackers’ innovations and the defenders’ responses. The race is afoot, and the outcome will shape the future information environment for democracies worldwide.
Sahil Sonalkar is a Research Intern at the Observer Research Foundation.
The views expressed above belong to the author(s).