Published on Apr 03, 2026

Generative AI has dismantled the older constraints on the production and distribution of disinformation. We are entering a new era of synthetic narratives, deepfakes, and cognitive warfare.

Algorithms of Falsehood: The Challenges of Governing AI-Generated Disinformation

Generative AI has removed the old constraints on disinformation: human effort, limited distribution, and high coordination costs. It enables mass content generation at low cost, compresses the time between creation and impact, and allows a small set of actors to operate at strategic scale. This article examines how AI-driven disinformation challenges existing legal frameworks and suggests an AI governance model built on the principles of trust and safety.

NewsGuard has identified more than 2,089 AI-generated news sites operating across 16 languages, almost all of them with little or no human oversight. In August 2025, leading chatbots relayed false claims 35 percent of the time, up from 18 percent a year earlier. Platforms, meanwhile, routinely amplify what drives engagement over what is true.

AI Content Metric | 2024 | 2025
--- | --- | ---
Chatbot falsehood rate | 18% | 35%
AI-generated news sites | ~600 | 2,089+
Web fraud growth (since 2021) | - | 1,600%

Source: NewsGuard AI Monitor and Entrust Identity Fraud Report.

The speed of this shift is difficult to grasp. Deepfake attacks occurred every five minutes in 2024. Digital document forgery rose by 244 percent in a single year. The “seeing is believing” standard is eroding, and its collapse yields the “liar’s dividend”: a politician caught in a scandal on video can now simply dismiss the footage as a deepfake.


The GoLaxy revelations of September 2025 provided a window into this world. Documents leaked from the Beijing-based firm showed a “Smart Propaganda System”: an army of AI personas engineered to look and think like real people. The personas draw on millions of data points to build psychological profiles of their targets, then adapt their behaviour to win trust. One dossier showed the system had targeted 2,000 public figures and 117 members of the US Congress. It uses “LLM grooming” to saturate search results with biased data, ensuring that when a target searches for a topic, the top results confirm the fabricated narrative. This is cognitive warfare.

The Indian Context

India is among the countries most exposed to AI-generated disinformation. Some 47 percent of Indian adults have encountered AI voice-cloning or deepfake scams, nearly double the global average of 25 percent. India's leading fact-checkers recorded a sharp escalation in 2025: AI-generated content accounted for more than 20 percent of all debunked material, over double the previous year's share. The threat operates on two registers: national security, where synthetic content is used to distort ground realities during crises, and personal dignity, where AI-generated disinformation is weaponised against individuals. Two striking cases from the past year illustrate these trends.

The Pahalgam terror attack

National security is now tied to the information ecosystem. The Pahalgam terror attack in Kashmir on April 22, 2025, killed 26 civilians. The aftermath, however, was an information war. Within hours, Telegram and X were flooded with synthetic narratives. Deepfake videos showed senior military officials discussing “false flag” operations. AI-generated images depicted dead bodies and militant figures as proof of fictitious military victories, and deployed religious and communal iconography to escalate tensions.

Disinformation Tactic (Post-Pahalgam) | Impact and Evidence
--- | ---
GAN-constructed footage | Forensic pixelation anomalies detected.
Deepfake lip-syncing | Nearly perfect; identified only by dialect errors.
Fabricated documents | Fake military resignations and advisories.
Aestheticised violence | Ghibli-style illustrations used to drive engagement.

The Indian government’s Press Information Bureau (PIB) identified seven major instances of misinformation during the crisis, but by then the damage had been done. The fake content delayed official intervention and eroded public trust in the security forces. Adversarial actors even used AI to transform photos of mourning families into grotesque dance sequences.

Gendered harm

In January 2026, a user on X became the target of disinformation generated by Grok AI. An individual prompted the AI with her profile picture to produce a fake sexualised image. Because Grok is embedded directly into X, the perpetrator never had to leave the platform to create the harmful content. When the user reported the image, X responded that it did not violate its rules, and the perpetrator sent further fake images to her private inbox. The Ministry of Electronics and Information Technology (MeitY) subsequently issued a notice to X, warning that its failure to observe statutory due diligence under India’s Information Technology Act and the IT Rules 2021[1] risked forfeiting its safe harbour immunity as an online intermediary. X’s initial response was deemed “inadequate,” prompting a second notice demanding transparency on Grok’s architecture and filtering mechanisms. The episode signals a regulatory shift from reactive takedowns toward structural platform accountability.

The Way Forward

  1. Build a Tiered Risk Classification Framework

MeitY’s November 2025 AI Governance Guidelines provide a laudable and comprehensive framework. But while the guidelines propose a risk architecture, they carry no statutory force. What is needed is to give that architecture teeth, calibrated to the specific risks confronting India. Three categories could attract mandatory pre-deployment compliance: AI-generated content during communal or national security crises; AI capable of producing audio, video, and imagery of real persons; and AI deployed for electoral influence operations. Lower-risk applications below these thresholds would continue under voluntary codes. The higher the potential for irreversible harm, the earlier the state must intervene.
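As a purely illustrative sketch, such a tiering rule could be encoded as a simple decision over declared system attributes. The attribute names and the binary high/low split below are hypothetical simplifications, not drawn from the MeitY guidelines:

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    HIGH = "mandatory pre-deployment compliance"
    LOW = "voluntary code of practice"


@dataclass
class AISystem:
    # Hypothetical attributes for illustration; not statutory definitions.
    generates_likenesses_of_real_persons: bool  # audio, video, or imagery
    deployed_during_declared_crisis: bool       # communal / national security
    used_for_electoral_influence: bool


def classify(system: AISystem) -> RiskTier:
    """Any of the three high-risk triggers forces pre-deployment
    compliance; everything else stays under voluntary codes."""
    if (system.generates_likenesses_of_real_persons
            or system.deployed_during_declared_crisis
            or system.used_for_electoral_influence):
        return RiskTier.HIGH
    return RiskTier.LOW


if __name__ == "__main__":
    chatbot = AISystem(False, False, False)
    voice_cloner = AISystem(True, False, False)
    print(classify(chatbot))       # RiskTier.LOW
    print(classify(voice_cloner))  # RiskTier.HIGH
```

The point of the sketch is that the triggers are declarative and auditable: a regulator can test a system's tier assignment without inspecting the model itself.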

  2. Reimagine Platform Liability for Embedded AI Tools

Section 79[2] of the IT Act grants safe harbour only to passive intermediaries. When X embeds Grok into its interface, it is a manufacturer, not a conduit. The Digital Personal Data Protection (DPDP) Act reinforces this: it holds data fiduciaries accountable for how personal data is processed. When Grok uses a real person’s photo to generate harmful content, that is data processing, and the platform is the fiduciary. The immunity granted by Section 79 cannot coexist with the accountability required by the DPDP Act. A targeted amendment to the IT Rules could withhold safe harbour from any platform that embeds a generative AI tool capable of producing disinformation. A platform that builds the weapon cannot claim ignorance of the wound.

  3. Ensure Mandatory Watermarking at the Point of Upload

The February 2026 amendment to the IT Rules mandates metadata tracing for AI-generated content, a step in the right direction. But it does not define how that traceability should work. The DPDP Rules, for their part, require data fiduciaries to maintain logs of data processing and of AI-generated content involving real persons’ likenesses. MeitY could issue binding technical standards specifying the minimum architecture for such watermarking and provenance metadata.
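What such a minimum architecture might involve can be sketched in a few lines: a content hash, a declared generator, and a signature binding them together at the point of upload. The record format, field names, and the symmetric signing key below are hypothetical illustrations rather than anything mandated by the IT Rules; a production standard would more likely use asymmetric signatures and an open provenance specification such as C2PA:

```python
import hashlib
import hmac
import json
import time

# Hypothetical platform signing key; in practice this would be an
# asymmetric key managed by the intermediary, not a shared secret.
PLATFORM_KEY = b"example-signing-key"


def provenance_record(content: bytes, generator: str, uploader_id: str) -> dict:
    """Build a minimal provenance record for AI-generated content at
    the point of upload."""
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,      # e.g. the embedded AI tool used
        "uploader_id": uploader_id,
        "uploaded_at": int(time.time()),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify(content: bytes, record: dict) -> bool:
    """Check the content hash and signature; any edit to the media or
    its metadata invalidates the record."""
    claimed = dict(record)
    signature = claimed.pop("signature")
    if hashlib.sha256(content).hexdigest() != claimed["content_sha256"]:
        return False
    payload = json.dumps(claimed, sort_keys=True).encode()
    return hmac.compare_digest(
        signature, hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    )
```

A binding standard would need to specify exactly this kind of detail: what is hashed, what is signed, who holds the keys, and how long the logs are retained.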

  4. Include a Crisis Disinformation Protocol Under Section 69A of the IT Act

Section 69A[3] grants the authority to block content in the interest of national security, public order, or sovereignty. It could be paired with an enforcement mechanism that works at the speed of the threat. During the Pahalgam crisis, for instance, by the time the PIB identified the synthetic disinformation, the content had already done its work. When a designated authority declares an emergency, significant social media intermediaries[4] should activate AI-detection sweeps within a defined window and suppress provably synthetic content pending review. The critical question is not whether to act, but who decides what counts as synthetic. That power cannot rest with the same authority that declared the emergency. The AI Safety Institute proposed in India’s AI governance guidelines could be assigned this function: the Institute, not the government, would conduct forensic testing and set and maintain the detection thresholds.
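The shape of such a protocol can be made concrete with a small sketch. The six-hour window, the 0.95 score threshold, and the detector score itself are hypothetical placeholders; in practice the thresholds would be set and audited by the independent reviewing body:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass
class EmergencyWindow:
    declared_at: datetime                      # tz-aware declaration time
    duration: timedelta = timedelta(hours=6)   # the "defined window"

    def active(self, now: datetime) -> bool:
        return self.declared_at <= now < self.declared_at + self.duration


@dataclass
class Post:
    post_id: str
    synthetic_score: float  # output of a forensic detector, 0.0 to 1.0
    suppressed: bool = False
    pending_review: bool = False


def crisis_sweep(posts: list[Post], window: EmergencyWindow,
                 threshold: float = 0.95) -> list[Post]:
    """During an active emergency window, suppress posts the detector
    scores as provably synthetic and queue them for independent review
    (e.g. by the proposed AI Safety Institute) rather than deleting them."""
    now = datetime.now(timezone.utc)
    if not window.active(now):
        return []
    flagged = []
    for post in posts:
        if post.synthetic_score >= threshold:
            post.suppressed = True
            post.pending_review = True
            flagged.append(post)
    return flagged
```

The design choice worth noting is suppression pending review rather than deletion: a false positive can be restored, and the reviewing body, not the declaring authority, controls the threshold.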


Purushraj Patnaik is a Research Assistant with the Centre for Digital Societies at the Observer Research Foundation.


[1] The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, notified on February 25, 2021, regulate social media, digital news, and OTT platforms.

[2] Section 79 of the Information Technology Act, 2000: exemption from liability of intermediaries in certain cases.

[3] Section 69A of the Information Technology Act, 2000: power to issue directions for blocking public access to any information through any computer resource.

[4] A Significant Social Media Intermediary (SSMI) in India is a social media platform with over 5 million registered users, subject to stricter compliance under the IT Rules 2021.

The views expressed above belong to the author(s).
