Author: Tanya Aggarwal

Expert Speak | Young Voices
Published on Mar 28, 2024

Navigating the deepfake dilemma: Government oversight in the age of AI

In a world where the legal vacuum surrounding tech and AI persists, how do we safeguard citizens from the adverse effects of a rapidly developing technological landscape?

Deepfakes, once tech jargon, are now part of everyday vocabulary. Recently, a video of Indian cricketer Sachin Tendulkar promoting an online gaming app went viral, making him the latest victim of deepfake manipulation, with his face and voice used without consent. In an era where deepfake videos and photos can distort our perception of reality and the internet has become a breeding ground for misinformation, the role of governments in safeguarding citizens from the rapid development of AI comes under scrutiny. This article explores the evolution of deepfake technology and the challenges governments face in catching up with its advancements.

Generative AI and deepfake technology

For effective regulation, it is important to understand the mechanisms behind Generative AI and deepfakes: both rely on deep learning, which involves training artificial neural networks on large amounts of data to decipher and learn patterns. "Deep" refers to the multiple layers through which the data is transformed during learning. The goal of deep learning is to enable computers to automatically learn and make decisions or predictions without explicit programming, relying on the patterns and representations discovered in the data. Deep learning is already integrated into multiple sectors, where it can be used to improve the quality of education, healthcare, and law enforcement. It is part of everyday tools such as digital assistants and chatbots, and of emerging technologies such as self-driving cars.
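
To make the "multiple layers" idea concrete, the sketch below shows a small deep neural network in Python. PyTorch is an assumption here (the article names no framework), and the layer sizes and placeholder data are purely illustrative: the point is that data passes through stacked layers, and training adjusts the weights from example data rather than explicit rules.

```python
import torch
import torch.nn as nn

# A minimal "deep" network: data passes through several stacked layers,
# each transforming its input before handing it to the next.
model = nn.Sequential(
    nn.Linear(784, 128),  # layer 1: raw input (e.g., a 28x28 image) -> 128 features
    nn.ReLU(),
    nn.Linear(128, 64),   # layer 2: higher-level patterns
    nn.ReLU(),
    nn.Linear(64, 10),    # layer 3: scores for 10 possible classes
)

# Training sketch: the network is never given explicit rules; it adjusts
# its weights to reduce prediction error on example data.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

inputs = torch.randn(32, 784)          # a batch of 32 example inputs (placeholder data)
labels = torch.randint(0, 10, (32,))   # their correct answers

for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()   # compute how each weight contributed to the error
    optimizer.step()  # nudge weights to reduce that error
```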

As a subset of deep learning, Generative AI is a "system trained on open-source content and historical data sets to generate new and unique creative outputs". Deepfake technology uses Generative AI to alter images and videos. While the term "deepfake" has acquired a negative connotation given recent events, the technology itself has positive uses, including making education more interactive, easing customer service, and saving labour time. However, misuse and manipulation have made the technology more dangerous than beneficial to society. Deepfake technology can be harmless when used for recreational purposes, such as the face-swapping and face-ageing applications popular on social media. The harm occurs when it is used to mislead and manipulate the public, as in cases of revenge pornography or voice manipulation. This is where government oversight and regulation become key to protecting citizens.

India's current AI landscape

The Indian government's use of AI was first laid out in 2018 in NITI Aayog's National Strategy for Artificial Intelligence, which discusses the use of AI in healthcare, agriculture, smart cities, infrastructure, transportation, and education. It serves as the official roadmap for AI research and development and raises ethical considerations of fairness, accountability, and transparency. To build on this, NITI Aayog released a Responsible AI document in 2021 aimed at mitigating the dangers of AI, focusing mainly on facial recognition technology. While it touches on privacy concerns, it does not address more nuanced topics such as deepfakes, as the technology was then in its nascent stages.

Regulations are often crafted based on the technology available at the time of their conception. To become law, policies must pass through multiple stages, including stakeholder consultations and regulatory and parliamentary approvals. This time-consuming process, while necessary, creates a legal vacuum between the development of a technology and the protection of its users. The Digital Personal Data Protection (DPDP) Act, passed in 2023, is a step in the right direction; however, it focuses on the processing of individuals' data by companies and third-party intermediaries rather than on deepfakes. Currently, multiple provisions of the Information Technology Act, 2000 and the Information Technology Rules, 2021, in addition to the Bharatiya Nyaya Sanhita, can be used to penalise those who misuse the technology.

No dedicated law has yet been introduced to protect citizens from this rapid development of technology, but with deepfake instances on the rise, the government has said that it is drafting legislation to address the issue specifically and has "directed social media and tech companies to take immediate steps against the menace, or be prepared to face penal action". In the new draft proposal, officials have discussed taking action not only against the individual responsible for creating and uploading the content but also against the platform on which it was published.

When technology can be used to manipulate and influence the public, the government needs to step in and protect its citizens. Yet the rapid pace of development makes oversight and regulation next to impossible: regulators struggle to pre-empt where the gaps between use and privacy will emerge. As AI becomes more pervasive in society, the question arises: can governments effectively protect citizens from the unintended consequences of these rapid advancements?

What does the future hold? 

While the National Strategy and Responsible AI framework laid down foundational principles for responsible AI use in India, they may not comprehensively address issues related to deepfakes and the consequent need to protect individual privacy. India is not alone in this conundrum. Governments across the world are facing this challenge, struggling to adapt policies to address new and unforeseen challenges. A policy crafted in 2019 may not adequately address the nuanced challenges seen in 2024. Governments must continually reassess and update policies to keep pace with emerging threats, ensuring the protection of citizens in an ever-changing technological landscape.

To address the issue of deepfakes specifically, some governments have attempted to introduce legislation or use a combination of existing legal frameworks against perpetrators. Measures include marking and labelling altered content and imposing fines on individuals, as well as on platforms that fail to do so. In the United States (US), for example, President Joe Biden signed an Executive Order "on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence" that entailed the labelling of altered content with immediate effect. Additionally, some members of Congress have introduced a "DEEPFAKES Accountability Bill" that seeks to bring criminal charges against social media platforms that fail to flag deepfake videos. Meanwhile, the European Union (EU) has implemented a combination of the Code of Practice on Disinformation and the Digital Services Act to monitor and regulate deepfakes, and has also proposed an EU AI Act that would increase transparency. Victor Riparbelli, the co-founder of Synthesia, an online video-editing platform that can be used to create deepfake videos, suggests strengthening existing frameworks that focus on intent, such as bullying and discrimination, rather than on content. There is also a need to increase public awareness and invest in education regarding the dangers that deepfakes pose.
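
To illustrate what "marking and labelling" altered content could look like in practice, here is a minimal, hypothetical sketch in Python: it fingerprints a media file and writes a disclosure manifest alongside it that a platform could check before distribution. The function name, manifest fields, and sidecar-file workflow are assumptions for illustration, not a description of any existing or mandated standard.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def label_altered_media(path: str, generator: str) -> Path:
    """Write a sidecar manifest declaring a media file as AI-altered.

    Illustrative only: the field names and sidecar-file approach are
    assumptions for this sketch, not any mandated labelling standard.
    """
    data = Path(path).read_bytes()
    manifest = {
        "file": Path(path).name,
        # A hash makes the label tamper-evident: if the file is edited
        # after labelling, the recorded fingerprint no longer matches.
        "sha256": hashlib.sha256(data).hexdigest(),
        "ai_altered": True,
        "generator": generator,
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }
    out = Path(str(path) + ".label.json")
    out.write_text(json.dumps(manifest, indent=2))
    return out

# A platform could flag or refuse uploads whose manifest is missing,
# or whose recorded hash no longer matches the file's current contents.
```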

In a world where the legal vacuum surrounding tech and AI persists, questions arise about the accessibility of AI tools to the public. Striking a balance between innovation and regulation is essential to ensure that governments can effectively protect citizens from the potential harms of rapidly advancing AI. As we navigate this complex landscape, the need for proactive, adaptable, and comprehensive regulatory frameworks becomes paramount in safeguarding the interests of society. The conversation around protecting citizens from misinformation and deepfakes, while safeguarding freedom of speech and expression, needs to take centre stage given the rapid development of AI. Deepfake technology needs its own oversight, rather than being lumped into a general "dangers of Generative AI" category.


Tanya Aggarwal is a Research Intern at the Observer Research Foundation.

The views expressed above belong to the author(s).
