With the escalating threat that deepfakes pose to the global financial sector, the Indian government’s multi-layered response makes significant strides in securing digital finance through horizontal regulation rather than sector-specific mandates.
A 2025 analysis of Artificial Intelligence (AI) scams in India reports that 47 percent of Indian adults have either been victims of, or know someone who has been a victim of, an AI voice-cloning or deepfake scam—nearly double the global average of 25 percent. The same report notes that 83 percent of Indian victims of AI voice scams suffered monetary loss, with almost half losing over INR 50,000, highlighting the rapid growth of deepfake-enabled fraud in the Indian financial ecosystem. Within this rapidly evolving threat landscape, one of the most concerning developments for financial institutions is the emergence and proliferation of deepfakes as tools for cyber-enabled financial crime.
AI and machine learning have advanced to the point where synthetic content can be rendered with such fidelity that most individuals are unable to distinguish it from authentic media. These developments have opened significant opportunities for innovation and efficiency; however, they have also enabled new forms of fraud, impersonation, and large-scale deception that challenge existing legal, technical, and organisational safeguards.
The term ‘deepfake’ derives from the combination of ‘deep’, referring to deep learning (DL), and ‘fake’, indicating fabricated or altered content. It generally denotes the manipulation of existing media—images, videos, or audio recordings—or the generation of entirely new, synthetic media using DL-based methods trained on large datasets. In practical terms, deepfakes encompass a range of outputs, including forged facial imagery, synthetic speech that mimics the voice characteristics of real individuals, and composite videos that integrate both manipulated visuals and fabricated audio. In the context of the financial sector, these capabilities create novel attack vectors for social engineering, identity theft, and high-value fraud, and pose urgent questions about detection, regulation, and organisational resilience.
Deepfakes pose multifaceted risks to financial institutions and their customers by threatening data integrity, privacy, and operational security. They can be deployed to manipulate or fabricate transactions, alter communications, and disrupt internal decision-support systems, thereby undermining confidence in the authenticity of digital records. As a result, they introduce substantial risk into business decision-making processes, since managers and automated systems may rely on falsified audiovisual evidence when assessing clients, approving transactions, or responding to market signals. In India, this risk has begun to materialise in mainstream finance and retail investing, with deepfake-based investment pitches impersonating central bank and market officials to lend false legitimacy to fraudulent schemes.
Deepfakes significantly amplify financial sector risk by weaponising seemingly realistic synthetic media across market manipulation, information security breaches, fraud, regulatory non-compliance, and reputational damage. They enable the rapid spread of fabricated content that can influence market prices, highly tailored impersonations that bypass authentication, and scalable synthetic identities that corrupt onboarding and hiring—ultimately undermining confidence in digital financial services and official communications. In 2024, a staff member at a Hong Kong-based engineering firm fell victim to a deepfake scam during a video conference call. The attackers impersonated the company’s CFO and colleagues using AI-generated videos, convincing the employee to authorise transfers amounting to approximately US$25 million from the company’s bank accounts. The case illustrates how convincing executive impersonation in a routine corporate setting can bypass human and process controls, leading directly to significant financial loss and organisational risk.
These dynamics have broader implications for international financial stability and security. For example, deepfake-enabled disinformation can be used by state or other politically motivated actors to manipulate stock markets through forged statements or fabricated appearances of officials. In May 2023, a pro-Russian account circulated an AI-generated image of an explosion near the Pentagon, briefly causing the Dow Jones Industrial Average to drop by 85 points within four minutes. Deepfakes also threaten biometric systems by critically lowering the reliability of facial recognition and voice authentication, thereby weakening mechanisms that many financial institutions rely on for secure remote access and customer verification. As a result, the technology systematically undermines traditional security controls based on the uniqueness of biometric traits.
The proliferation of deepfakes also reshapes user behaviour and perceptions of risk. As synthetic media become more prevalent, concerns about deception intensify, fostering reluctance among customers to trust digital interactions, remote advisory services, and online onboarding processes. Deepfakes undermine traditional mechanisms designed to protect user privacy, eroding confidence that personal data and likenesses will not be weaponised against individuals. Multimodal deepfakes, which combine audio, video, and textual cues, further increase believability, making it significantly harder for users to distinguish legitimate messages from malicious ones and thus raising the success rate of social engineering campaigns. A recent global survey by the UK-based tech firm iProov found that 49 percent of respondents reported lower trust in social media after learning about deepfakes, and 74 percent expressed concern about their broader societal impact. Indian consumers have already been targeted by deepfake investment advertisements featuring prominent public figures such as Sudha Murty, with victims reporting substantial losses, reinforcing public anxieties about the safety of sharing personal images and recordings in an increasingly AI-mediated financial environment.
Finally, deepfakes exploit and magnify the vulnerabilities of social media and digital communication platforms as vectors for rapid dissemination. Empirical evidence shows that hostile content can reach large audiences at minimal cost—as low as US$0.07 per view—enabling deepfake-driven narratives or scams to achieve mass-scale proliferation in a short period. In the financial context, this includes videos impersonating financial advisers, investment bankers, or subject-matter experts to promote fraudulent investment schemes or manipulate retail investor sentiment.
Addressing these challenges requires a combination of technical countermeasures, such as more resilient biometric and authentication systems, alongside stronger privacy and financial regulation tailored to synthetic media.
The Indian government has addressed deepfake threats, including those in the financial sector, through an integrated strategy of enhanced cyber laws, advisories to digital platforms, and institutional strengthening. Though technology-neutral in wording, these measures directly tackle AI-driven misinformation, impersonation, and identity theft that fuel financial scams and cyber-fraud.
| Legislation | Obligations | Relevance to Deepfake Fraud |
| --- | --- | --- |
| IT Act 2000 | Provides a framework for electronic records and signatures, and prescribes penalties for identity theft (s.66C), cheating by personation (s.66D), and privacy violations (s.66E). | Criminalises synthetic identities and impersonations used in scams. |
| IT Intermediary Rules 2021 (amended 2022/2023) | Requires "due diligence" to prevent the hosting or sharing of unlawful content, including misinformation and impersonation; Rule 3(1)(b) mandates user notifications in the user's preferred language and proactive content removal. | Platforms must block deepfake videos of officials or fraudulent advice, with loss of safe-harbour protection for non-compliance. |
Source: Author’s creation
This table outlines the core laws and rules imposing proactive content moderation on platforms, creating a baseline to curb deepfake dissemination in financial scams.
The advisories issued by the Ministry of Electronics and Information Technology (MeitY) in December 2023 compel intermediaries to explicitly prohibit AI-generated deepfake misinformation. Platforms must notify users during registration, login, and uploads, citing penalties under the IT Act and the Bharatiya Nyaya Sanhita. This ensures that deepfakes mimicking financial regulators, bank officials, or experts are banned in platforms' terms of service and swiftly removed before they can spread among retail investors.
A follow-up advisory within six months mandates the removal of deepfakes and misinformation within 36 hours of user or government complaints. Non-compliance activates Rule 7 of the IT Rules 2021, stripping safe-harbour immunity under Section 79 of the IT Act and inviting civil or criminal action. This mechanism drives the rapid removal of fraud-promoting deepfakes and impersonations to prevent scams and market manipulation.
The government views deepfakes as a significant obstacle to its goal of creating a “safe, trusted, and accountable” cyberspace, and has therefore established a multi-layered institutional framework to detect, restrict, and respond to deepfake-driven harms that threaten this objective. Complementing the IT Act are the Digital Personal Data Protection Act 2023 and Bharatiya Nyaya Sanhita 2023, which criminalise identity theft, impersonation, privacy breaches, disinformation, and organised cybercrime—all of which are vectors for advanced financial fraud. Key institutions include the Indian Cyber Crime Coordination Centre (I4C), CERT-In, the National Cyber Crime Reporting Portal, and the Grievance Appellate Committees. These enable reporting, automated takedowns, and coordinated action against AI threats for citizens and financial victims.
For finance, these indirect measures prove effective: they criminalise data and identity misuse, enforce deepfake removal across platforms, and strengthen cyber response capacity. This constrains scam infrastructure without deepfake-specific banking rules, leveraging broad cyber, data, and platform norms to protect India's digital financial growth.
India’s multi-layered approach substantially mitigates deepfake threats to the financial sector through fortified legal foundations, augmented platform accountability, and enhanced institutional resilience. It mandates the expeditious removal of impersonation content within 36 hours, thereby curtailing scam dissemination; criminalises the identity misuse and disinformation underpinning financial fraud; and facilitates rapid incident reporting with coordinated governmental response. To augment its efficacy, this horizontal architecture encompassing cyber law, data protection, and enforcement should be complemented by intensified public awareness campaigns, financial literacy programmes for vulnerable demographics, and lessons from counterparts such as the European Union’s AI Act and Singapore’s deepfake detection mandates, thereby securing the digital financial ecosystem without sector-specific legislation.
Pranoy Jainendran is a Research Assistant with the Centre for Security, Strategy and Technology at the Observer Research Foundation.
The views expressed above belong to the author(s).