Author: Anulekha Nandi

Published on Jul 18, 2024

Research highlights the potential for deepfakes to undermine trust in online identity verification processes, which have come a long way in reducing transaction costs in the financial sector.

Deepfakes and potential implications for online identity verification

Deepfakes have predominantly captured headlines for impersonation, reputational damage, and misinformation and disinformation; the World Economic Forum identifies the latter as the most significant short-term risk facing the world today. Indeed, 96 percent of deepfake content online is non-consensual pornographic simulation of female celebrities. As countries around the world grapple with a near-100-percent increase in deepfake content since 2018, another threat lurks in the background: the use of deepfakes to undermine the online know-your-customer (KYC) compliance processes that banks and financial institutions use to verify the identity of their customers.

Security firm Sensity released the Deepfake Offensive Toolkit (DOT) for penetration testing of the online identity verification systems of the top 10 vendors and found nine of them extremely vulnerable to deepfake attacks. DOT produces controllable deepfakes for virtual camera injection, which require no additional training and can be applied in real time to a photo of the person targeted for facial impersonation. DOT was able to manipulate ID images and bypass security protocols mandated by regulators for financial institutions: Sensity used deepfakes to manipulate an ID card so that it scanned with the target's face, and then used the same face in a video stream to pass the vendors' liveness tests. Liveness tests generally ask people to look into the camera, blink, turn their head, or smile, both to prove that they are a real person and to allow comparison with the identification presented. Other studies have shown that deepfakes can bypass facial recognition systems from Microsoft and Amazon with a success rate of approximately 78 percent.
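
To see why virtual camera injection is so hard to catch, consider the toy sketch below. It is purely illustrative, not Sensity's tooling or any vendor's actual API: it models a liveness check that passes when it observes a blink in the incoming frames. Because the check only ever sees pixels, a deepfake rendered through a virtual camera driver that synthesises a blink on demand produces exactly the same stream as a physical webcam.

```python
# Illustrative sketch only (not any vendor's real API): a liveness check
# consumes frames and looks for a blink, but it only ever sees pixels.
# A real-time deepfake fed through a virtual camera driver yields the same
# frame stream as a physical webcam, which is the gap DOT exploits.
from dataclasses import dataclass
from typing import Iterable, Iterator

@dataclass
class Frame:
    eyes_open: bool  # stand-in for what a per-frame eye detector would infer

def liveness_check(frames: Iterable[Frame]) -> bool:
    """Pass if the subject's eyes close and then reopen (a 'blink')."""
    saw_closed = False
    for f in frames:
        if not f.eyes_open:
            saw_closed = True
        elif saw_closed:
            return True  # eyes reopened after closing: blink observed
    return False

def physical_webcam() -> Iterator[Frame]:
    # A real person blinking in front of a camera.
    yield from [Frame(True), Frame(False), Frame(True)]

def virtual_camera_injection() -> Iterator[Frame]:
    # A deepfake rendered to a virtual camera device can synthesise a
    # blink on demand: identical frames, no real person present.
    yield from [Frame(True), Frame(False), Frame(True)]

print(liveness_check(physical_webcam()))           # True
print(liveness_check(virtual_camera_injection()))  # True: the check is fooled
```

The point of the sketch is that the verification service has no way, from the frame stream alone, to know whether the frames originated in physical optics or in software.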

Deepfakes rely on deep learning, autoencoders, and artificial neural networks, and have gained momentum with the evolution of underlying technologies like generative adversarial networks (GANs). GANs, first developed by Ian Goodfellow and colleagues in 2014, involve two competing neural networks. The first network (the generator) is fed data representative of the content to be produced; it learns from this data and produces new content exhibiting the same characteristics. This new content is then presented to the second network (the discriminator), which is trained to identify flaws and reject outputs it determines to be unrepresentative of the original data. The result is returned to the first network so that it can learn from its mistakes, and so proceeds a recursive process of self-learning. While GANs are sophisticated machine learning methods, as the technology has evolved, so has the ability to package it into websites and software with which anyone, even with limited computer skills, can generate AI-manipulated synthetic content.
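
For readers who want the mechanics spelled out, the sketch below is a minimal generator-discriminator loop in PyTorch. All names, sizes, and parameters are illustrative assumptions: it learns to imitate a simple one-dimensional Gaussian distribution rather than faces, but the recursive contest between the two networks is the same one described above.

```python
# A minimal GAN training loop (illustrative sketch, assuming PyTorch).
# The "real" data is a 1-D Gaussian so the example runs with no dataset.
import torch
import torch.nn as nn

torch.manual_seed(0)

# First network (generator): turns random noise into candidate samples.
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Second network (discriminator): scores samples as real (1) or fake (0).
discriminator = nn.Sequential(
    nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid()
)

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    # Real samples: the distribution the generator must learn to imitate.
    real = torch.randn(64, 1) * 1.5 + 4.0
    fake = generator(torch.randn(64, 8))

    # Train the discriminator to tell real samples from generated ones.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Train the generator to fool the discriminator, i.e. to "learn
    # from its mistakes" via the discriminator's feedback.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

# The generated samples' mean should drift toward the real mean (4.0).
print(generator(torch.randn(1000, 8)).mean().item())
```

Production deepfake generators are vastly larger and operate on images rather than scalars, but the adversarial feedback loop is the same.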

Trust and transaction costs

e-Governance and the digitalisation of public services have led to newer, digitally enabled forms of managing citizen identity and identification. e-KYC, or electronic identity verification, has gone a long way in reducing transaction costs in the financial sector by easing customer onboarding. As an illustrative example, in India it has brought the cost of KYC down from INR 700 to INR 3, with a 75-percent reduction in the cost of loan processing. However, DOT's testing highlights the potential for undermining trust in such practices. Trust is an important aspect of technology deployment, implementation, and adoption, and can be conceptualised along two axes: technical (technical modes of enhancing system security) and institutional (modes of governance for risk management). With these two axes working in tandem, operationalising trust becomes a relational process occurring between people or entities within a given context. This can be expressed as a three-place relation in which A (the trustor) trusts B (the trustee) to fulfil C (a task); as a result, it entails risk and vulnerability on the part of the trustor (A). Societies with optimum levels of trust witness lower transaction costs.

Tools like DOT, with their ability to spoof identity, introduce vulnerabilities into, and widen, the relational operation of trust. The individuals and entities at risk of harm include not just those whose identities are spoofed with malicious intent but also financial institutions, which are accountable to regulators for their ability to detect these emerging forms of risk. This shakes multiple layers of interdependency within financial systems and necessitates rethinking existing compliance structures. The introduction of newer forms of vulnerability has the potential to undermine consumer trust in online identity verification systems, thereby raising the transaction costs of enforcement and detection. The inability to verify the authenticity of an applicant within financial systems creates the potential for systemic risks that can undermine a crucial pillar of financial services and reverse the gains of e-KYC.

Developing technical and institutional trust

Anticipating and mitigating emerging attack scenarios like DOT involves developing capacities for technical and institutional trust. The ISO/IEC 30107-3 presentation attack detection (PAD) standard does not include deepfakes among the attacks it covers. Techniques used by PAD systems, such as 3-D motion detection and texture analysis, can detect deepfakes in the video replay attacks to which they are usually applied. They cannot, however, detect virtual camera injections of the kind DOT performs, since these attacks present more than one attack vector. While artificial intelligence techniques such as blink detection, occlusion analysis, and image forensics can detect deepfakes to some extent, virtual camera injections remain a looming and credible threat.
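
As an illustration of one detection building block named above, the sketch below computes the eye aspect ratio (EAR) used in classic blink detection. Landmark extraction is assumed to happen upstream, for instance with dlib or MediaPipe; the hard-coded coordinates here are illustrative only. It also makes the text's limitation concrete: a real-time deepfake injected through a virtual camera can synthesise plausible blinks, so a signal like EAR cannot by itself rule out injection attacks.

```python
# Blink detection via eye aspect ratio (EAR): an illustrative sketch of
# one signal PAD-style systems use. Landmark coordinates would normally
# come from a face-landmark model; the arrays below are made up.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """EAR from six landmarks: two horizontal corners, four eyelid points.
    EAR stays roughly constant while the eye is open and drops toward
    zero when it closes, so a dip-and-recover pattern signals a blink."""
    v1 = np.linalg.norm(eye[1] - eye[5])  # vertical lid distance 1
    v2 = np.linalg.norm(eye[2] - eye[4])  # vertical lid distance 2
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal eye width
    return (v1 + v2) / (2.0 * h)

EAR_THRESHOLD = 0.2  # below this, treat the eye as closed

# Hypothetical landmark sets for an open and a (nearly) closed eye.
open_eye = np.array([[0, 0], [2, 2], [4, 2], [6, 0], [4, -2], [2, -2]], float)
closed_eye = np.array([[0, 0], [2, 0.3], [4, 0.3], [6, 0],
                       [4, -0.3], [2, -0.3]], float)

for label, eye in [("open", open_eye), ("closed", closed_eye)]:
    ear = eye_aspect_ratio(eye)
    state = "closed" if ear < EAR_THRESHOLD else "open"
    print(f"{label}: EAR={ear:.3f} -> {state}")
```

A deepfake rendered frame by frame through a virtual camera can reproduce exactly this dip-and-recover EAR pattern, which is why blink detection alone does not close the gap DOT exposes.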

The use of deepfakes has been more readily recognised in areas like reputational damage and the undermining of democracy, and jurisdictions around the world have been working to develop laws and regulations for deepfakes in response. However, it is important to understand the emerging risks presented by DOT-level functionality and how they manifest in specific sectors. Just as laws and regulations now govern deepfakes in the areas where they have gained more prominence, regulators and policymakers need to work towards enhancing institutional capacity to address sector-specific vulnerabilities.


Dr. Anulekha Nandi is a Fellow in Technology, Economy, and Society at the Observer Research Foundation (ORF). Her primary area of research includes digital innovation management and governance.

The views expressed above belong to the author(s).
