Author : Soumya Awasthi

Expert Speak Digital Frontiers
Published on Apr 24, 2025

The rising use of artificial intelligence and deepfakes has enabled identity theft. This multifaceted challenge demands robust, immediate, and coordinated solutions. 

The Rising Threat of Dual-Use Technology: A Looming Crisis in the Age of AI

Image Source: Getty

A recent report revealed that artificial intelligence (AI) models, such as ChatGPT, can generate fake Aadhaar and PAN cards, raising fresh concerns about the dual-use potential of AI. Although some experts have sought to allay public fears by highlighting the current safeguards that prevent widespread abuse, the incident underscores a more profound, systemic issue. As AI technologies become more sophisticated and accessible, the prospect of misuse by malicious actors has emerged as a genuine and persistent threat. Against this backdrop, the broader implications of dual-use technology must be critically examined.

Emerging Threat to National Security

AI has transformed myriad industries, offering unparalleled convenience and innovation. However, alongside its exponential growth, AI has introduced intricate ethical, social, and security challenges that society is only beginning to comprehend. The recent revelations surrounding the ability of AI models such as ChatGPT to generate forged Aadhaar and PAN cards highlight a disconcerting dual-use phenomenon. At the heart of this issue lies the very nature of technology itself: artificial intelligence, by its design, can be harnessed for both constructive and destructive purposes. While AI enhances healthcare, education, and logistics, the same technology can be used to execute fraudulent activities, cyberattacks, and misinformation campaigns. The ability of AI models to produce convincing fake identification documents signals a perilous new frontier where criminal enterprises, terrorist organisations, and rogue states can exploit this power to undermine national security and social order.

The recent revelations surrounding the ability of AI models such as ChatGPT to generate forged Aadhaar and PAN cards highlight a disconcerting dual-use phenomenon.

India’s recent experiences magnify these concerns. The CoWIN data breach exposed vulnerabilities in critical public health databases, while AI-powered deepfake scams targeted corporate entities by imitating senior executives to orchestrate financial fraud. According to a McAfee report, 83 percent of Indian victims of fake voice calls lost money. During electoral seasons, AI-enabled bot networks and deepfake videos inundated social media platforms with disinformation, swaying public opinion and threatening democratic processes. Moreover, the banning of Chinese-origin AI apps highlighted how AI-driven data-mining operations directly threatened sovereignty and personal privacy. Predatory loan apps have weaponised AI analytics to extract sensitive user data and conduct coercive recovery campaigns. With the rising use of digital payment platforms, scammers have found an opportunity to create clones of UPI apps such as GPay, PhonePe, and Paytm. Meanwhile, the potential misuse of LLMs, such as ChatGPT, to produce fake official documents has made identity theft even more convenient for cybercriminals. These incidents reveal a growing pattern in which both state and non-state actors leverage AI’s dual-use capabilities, whether for power consolidation or criminal exploitation, exposing systemic vulnerabilities.

Potential Security Threats

Another dimension of this security risk is the exploitation of fake documents by foreign nationals attempting to enter India illegally. For instance, multiple reports have identified cases in which Bangladeshi refugees and Rohingya migrants acquired fake identification documents to settle within Indian territory unlawfully. Similarly, there have been cases where Pakistani citizens used false credentials to mask their identities during clandestine operations aimed at destabilising internal security by joining paramilitary forces.

The emergence of AI-generated Aadhaar or PAN cards exacerbates this vulnerability exponentially. Border regions, such as West Bengal, Assam, Punjab, and Jammu and Kashmir, which are already strained by migration pressures, could witness a dramatic spike in infiltration. Traditional border controls and verification processes would struggle to intercept such digitally armed infiltrators, heightening the complexity of securing national borders in an AI-driven world.

The vetting processes at security checkpoints, which rely primarily on document authenticity, could be rendered obsolete if sophisticated forgeries escape traditional detection methods.

Perhaps even more alarming is the potential threat to national security. Should terrorist elements gain access to fake documents, the repercussions could be catastrophic. Entry into sensitive locations such as nuclear facilities, military installations, or government buildings becomes possible. The vetting processes at security checkpoints, which rely primarily on document authenticity, could be rendered obsolete if sophisticated forgeries escape traditional detection methods. Infiltrators could orchestrate attacks, sabotage critical infrastructure, or gather classified intelligence under assumed identities. In this context, national defence becomes vulnerable not through traditional warfare but through subtle, invisible incursions enabled by technological prowess.

Although mainstream AI platforms have safeguards to prevent the creation of illegal or malicious content, cybercriminals continue to bypass controls through adversarial prompting or by deploying customised AI models with fewer ethical restrictions. Unlike traditional cybercrime, which often required specialist expertise, AI-driven offences can now be executed even by amateur actors, intensifying the extent and frequency of threats. Without proactive governance frameworks, ethical oversight, and international cooperation, the dual-use dilemma of artificial intelligence risks destabilising national security, eroding public trust, and imperilling the foundational structures of modern society.

The Collapse of Trust: Social Security and Identity Verification

One immediate concern stemming from AI's dual-use potential is the erosion of social security frameworks. The explosion of AI-generated forged documents could impact these systems. Deepfake technology, in particular, poses a unique threat to the foundations of trust on which identity verification in the financial sector rests. Traditional mechanisms such as voice authentication, biometric verification, and in-person interactions are increasingly exposed to synthetic media that can persuasively imitate individuals.

Financial institutions, which rely on identity authenticity for transactions, compliance, and customer engagement, face a failure of trust that could undermine their core functioning. Attackers abusing deepfakes can circumvent security protocols, initiate fraudulent transfers, manipulate financial markets, and deceive professionals. The combined effect of AI and cybercrime could usher in a new era of decentralised, anonymous, and highly efficient terror funding networks that traditional counter-terrorism methods would struggle to dismantle.

Furthermore, when personal data is combined with forged identification documents, it becomes an extremely valuable commodity in illicit markets on the dark web. AI can accelerate the harvesting, packaging, and sale of stolen identities. Identity theft, already a pressing issue, would grow rapidly, victimising millions for crimes they did not commit.

The combined effect of AI and cybercrime could usher in a new era of decentralised, anonymous, and highly efficient terror funding networks that traditional counter-terrorism methods would struggle to dismantle.

In such an environment, trust in digital identity is eroded, and the broader public confidence in financial institutions is at risk. As institutions adopt increasingly digital platforms, deepfakes blur the line between legitimate and fraudulent interactions, making conventional identity verification methods insufficient. Without a comprehensive security framework that combines technological innovation with human vigilance, the financial sector risks a systemic collapse, eventually jeopardising trust in the economic systems that rely on identity verification.

Towards a Robust Response: Regulation, Innovation, and Public Awareness

Addressing such a multi-faceted challenge demands robust, immediate, and coordinated solutions. The first step lies in the development of advanced AI governance frameworks. Policymakers must move beyond voluntary codes of conduct towards binding international agreements that regulate AI systems' deployment, access, and capabilities. AI developers should be mandated to incorporate fail-safe mechanisms, including dynamic monitoring and real-time auditing, to detect and halt the generation of harmful content. Governments must also invest in specialised AI watchdog agencies empowered to enforce compliance, investigate misuse, and adapt regulations as technology evolves.
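The fail-safe mechanisms described above can be illustrated with a minimal sketch. The function name `screen_request` and the deny-list below are hypothetical, and real safeguards rely on trained classifiers and layered policy checks rather than keyword matching; this only shows the shape of a pre-generation screening step.

```python
import re

# Hypothetical deny-list of identity-document terms; a production
# safeguard would use trained classifiers, not keyword matching.
BLOCKED_PATTERNS = [
    r"\baadhaar\b",
    r"\bpan\s+card\b",
    r"\bpassport\b",
    r"\bfake\s+id\b",
]

def screen_request(prompt: str) -> bool:
    """Return True if the generation request should be blocked."""
    lowered = prompt.lower()
    return any(re.search(p, lowered) for p in BLOCKED_PATTERNS)

# Flagged requests would be logged for audit rather than silently dropped.
print(screen_request("Generate a realistic fake ID template"))  # True
print(screen_request("Summarise this policy paper"))            # False
```

In practice, such a filter would sit in front of the model and feed flagged prompts into the real-time auditing pipeline the paragraph describes.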

Simultaneously, security protocols must evolve in sophistication. Traditional document verification systems, which rely solely on visual inspection, are insufficient against AI-generated counterfeits. Biometric authentication, blockchain-backed identity management, and encrypted verification tokens offer promising alternatives that should be integrated widely. Security agencies must be equipped with AI-driven forensic tools capable of detecting synthetic alterations, metadata inconsistencies, and watermarking anomalies in documents.

Biometric authentication, blockchain-backed identity management, and encrypted verification tokens offer promising alternatives that should be integrated widely.
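The idea behind cryptographically verifiable identity tokens can be sketched as follows. This is an illustrative assumption, not a description of any deployed Aadhaar or PAN system: an issuing registry signs a canonical record of the document's fields with a secret key, so a verifier can detect any forged or altered field without visual inspection.

```python
import hashlib
import hmac
import secrets

# In practice the issuer key would live in an HSM, not in process memory.
SECRET_KEY = secrets.token_bytes(32)

def issue_token(document_id: str, holder_name: str) -> str:
    """Sign a canonical record of the document's fields (illustrative)."""
    record = f"{document_id}|{holder_name}".encode()
    return hmac.new(SECRET_KEY, record, hashlib.sha256).hexdigest()

def verify_token(document_id: str, holder_name: str, token: str) -> bool:
    """Recompute the tag; any altered field yields a mismatch."""
    expected = issue_token(document_id, holder_name)
    return hmac.compare_digest(expected, token)

token = issue_token("ABCDE1234F", "A. Citizen")
print(verify_token("ABCDE1234F", "A. Citizen", token))   # True
print(verify_token("ABCDE1234F", "B. Impostor", token))  # False
```

A forged card can reproduce a document's appearance, but without the issuer's key it cannot produce a token that verifies, which is why token-based checks resist AI-generated counterfeits in a way visual inspection does not.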

Another vital solution lies in public education and awareness. Citizens must be sensitised to the realities of digital vulnerabilities. Ultimately, the battle against the misuse of dual-use technologies in the age of AI is not a battle against progress itself, but a struggle to ensure that progress remains a force for good rather than a harbinger of unforeseen crises.

Conclusion

The revelations surrounding ChatGPT’s capacity to generate fake Aadhaar and PAN cards are more than standalone incidents; they indicate a broader, more worrying trend. As artificial intelligence continues to evolve, so will its potential for misuse, touching every aspect of society from national security to financial integrity and social trust. Combating these risks requires an all-encompassing approach. Binding international regulations, advanced AI-driven verification technologies, empowered oversight agencies, and widespread public awareness are all necessary components of a comprehensive defence. Failure to act decisively now risks ceding control to criminals who will misuse technological progress for personal or political gain. The urgency to govern AI ethically and effectively has never been more pressing, for in safeguarding technology, we protect society.


Soumya Awasthi is Fellow, Centre for Security, Strategy and Technology at Observer Research Foundation.

The views expressed above belong to the author(s).
