Author: Antara Vats

Expert Speak Digital Frontiers
Published on Jan 15, 2022

MeitY needs to spearhead the initiative to draft a suitable framework to regulate Artificial Intelligence by working closely with relevant stakeholders

Pivotal regulatory considerations for #AIForAll

2021 saw governments make systematic efforts to operationalise the ethical principles and frameworks on Artificial Intelligence (AI) published since 2017. In 2021, concrete proposals for regulating AI were drafted by the European Union (EU) in the form of the Artificial Intelligence Act, in the United States (US) in the form of a bill from New York on addressing AI bias in hiring, and in China with draft rules on recommendation algorithms. Efforts to concretise existing principles on AI through technical and legal measures are going to proliferate in 2022. This article tracks the regulatory approach to AI proposed by NITI Aayog and provides pivotal considerations for the Indian Ministry of Electronics and Information Technology (MeitY) as it steps up as the nodal agency.

NITI Aayog’s approach to AI regulation

NITI Aayog, the premier policy think tank of the Indian government, in collaboration with the Centre for the Fourth Industrial Revolution at the World Economic Forum, published the “Approach Document for India: Part 1 - Principles for Responsible AI” (Part 1) in February 2021 and the “Approach Document for India: Part 2 - Operationalising Principles for Responsible AI” (Part 2) in August 2021. Both documents provide an overarching framework of principles and a glimpse into the regulatory approach imagined for reducing AI harms.

The Approach Document Part 1 studied the potential risks of AI and proposed broad ethical principles for Responsible AI (RAI), grounded in the fundamental rights guaranteed by the Indian Constitution. It also set the stage for Part 2 by acknowledging the need to institute enforcement mechanisms that translate the proposed principles into practice. The Approach Document Part 2 provided an agile approach for operationalising RAI by employing a combination of self-regulation and governmental regulation. The document recognises that a one-size-fits-all approach would not be effective and instead proposes a risk-based approach. The suggested regulatory interventions would respond proportionately to the magnitude, nature, and probability of risks from the design, development, and deployment of specific AI use cases. The document states the underlying principle for this approach as, “the greater the potential for harm, the more stringent the requirements and the more far-reaching the extent of regulatory intervention.” Self-regulation would be employed where the risk of harm is low, while legislative intervention is proposed where the risk of harm is significant, such as the use of AI to predict criminal behaviour before a crime is committed, where fundamental rights could be violated. The document has also put forward a proposal for a multidisciplinary Council for Ethics and Technology (CET) to assist sectoral regulators in the ethical risk profiling of AI use cases by drawing from existing case studies across sectors and conducting research. This independent think tank may also be tasked with the crucial job of driving convergence amongst different sectoral regulators.

Policymakers recognise that a one-size-fits-all approach would not be effective and instead propose a risk-based approach. The suggested regulatory interventions would respond proportionately to the magnitude, nature, and probability of risks from the design, development, and deployment of specific AI use cases.

Both documents display a visible shift from the traditional command-and-control model of regulation to an approach that provides enough room for the technology to grow while ensuring public safety. The traditional model of regulation would be unable to keep up with the rapid pace of AI development because of the many friction points inherent in the existing system, such as sectoral overlaps and bureaucratic structures. For instance, India’s flagship National Programme on AI (NPAI), led by MeitY, aims to lay a strong foundation for a sustainable AI ecosystem but is still awaiting cabinet approval. Confusion over whether NITI Aayog or MeitY would be the nodal agency is one of the core reasons for the delay in NPAI’s execution. A committee headed by K. Vijay Raghavan, Principal Scientific Advisor to the Prime Minister, was set up to resolve this and finally cleared the air in September 2020 when it declared MeitY the nodal agency. Moreover, command-and-control regulation requires the regulating body to specify compliance requirements in fairly precise terms, backed by enforcement powers and sanctions. The dynamic nature of AI complicates the task of anticipating every possible use case or the risks emanating from it. Additionally, AI systems encode statistical correlations rather than explicit rules, making it hard to audit them or decipher why a particular algorithmic outcome was produced.

The Economic Survey 2020-21 pointed out that Indian administrative processes tend to overregulate by pre-empting every possibility, inevitably leading to opacity in decision-making. As a solution, it proposed moving away from the traditional mode of regulation towards principles-based regulation. This shift would allow for regulatory discretion, but it has to be balanced with increased transparency, systems for ex-ante accountability, and ex-post resolution mechanisms. NITI Aayog has proposed a stable principles-based framework and has assigned CET the task of identifying the specifics for each use case. However, the documents leave unanswered some questions that are important for ensuring transparency and accountability: the mechanisms for risk assessment, a compliance framework for already deployed AI use cases, and the membership criteria for CET. Going forward, as NPAI takes shape, MeitY must build upon NITI Aayog’s approach documents to address these questions, which will be central to determining the effectiveness of the proposed approach.

Mechanisms for risk assessment

The legitimacy of the risk-based regulatory approach hinges upon the mechanism adopted for identifying and benchmarking risks. This needs to be addressed at the outset: the entire regulatory approach is based on responding to the level of risk, and in the absence of clarity on the benchmarks used to categorise risks, the proposal is incomplete. In Part 1, NITI Aayog provided an approach for identifying and defining the ethical considerations, or impacts, of risks in two broad categories, illustrated through examples: systems considerations and societal considerations. These considerations deal with AI systems designed to solve specific challenges, or Narrow AI. Systems considerations cover direct impacts on affected stakeholders arising from outcomes guided by the design choices of AI systems, such as privacy risks to children when Facial Recognition Technology (FRT) is used to mark attendance in schools. Societal considerations cover indirect impacts on affected stakeholders arising from the settings in which AI solutions are adopted, such as the impact of automation on jobs. Unlike the EU, which has adopted a similar risk-based approach and defined a scale of risk (unacceptable risk, high risk, limited risk, and minimal risk), NITI Aayog has refrained from defining the scale and the accompanying risk matrix with scores for likelihood and consequence.

NITI Aayog has provided an approach for identifying and defining the ethical considerations, or impacts, of risks in two broad categories: systems and societal considerations.
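By way of illustration, the sketch below shows one way a likelihood-consequence risk matrix could be encoded and mapped to EU-style tiers. The numeric scales, thresholds, and example use cases are hypothetical assumptions for this sketch and are not drawn from NITI Aayog’s documents or the EU’s Artificial Intelligence Act.

```python
# Illustrative sketch of a likelihood-consequence risk matrix for AI use cases.
# All scales, thresholds, and example scores are hypothetical, not an official scheme.

LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3, "almost_certain": 4}
CONSEQUENCE = {"minimal": 1, "moderate": 2, "severe": 3, "critical": 4}

def risk_tier(likelihood: str, consequence: str) -> str:
    """Map a likelihood-consequence pair to an EU-style risk tier."""
    score = LIKELIHOOD[likelihood] * CONSEQUENCE[consequence]
    if score >= 12:
        return "unacceptable risk"
    if score >= 8:
        return "high risk"
    if score >= 4:
        return "limited risk"
    return "minimal risk"

# Hypothetical use cases scored purely for illustration.
use_cases = {
    "FRT attendance in schools": ("likely", "severe"),
    "Movie recommendation engine": ("almost_certain", "minimal"),
    "AI-assisted credit scoring": ("possible", "severe"),
}

for name, (likelihood, consequence) in use_cases.items():
    print(f"{name}: {risk_tier(likelihood, consequence)}")
```

A real framework would need publicly documented benchmarks behind each score; it is precisely this scale and scoring that the approach documents leave undefined.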

Compliance for already deployed use cases 

Many civilian use cases of AI have already been deployed in India to further #AIForAll. MeitY, in collaboration with the IndiaAI portal, released “75@75 India’s AI Journey” as the country celebrated 75 years of independence. Rajeev Chandrasekhar, the Indian Minister of State for Electronics and IT, stated in the foreword of the document that, “the Government gives high priority to the mandate for digital transformation, which include focusing on innovations in public service delivery, high-speed connectivity networks, cyber strategies, quantum computing and AI”. This collection provides a glimpse into the socio-economic challenges that AI can potentially solve in areas like healthcare, sanitation, education, and last-mile delivery of public services. Unfortunately, the deployed AI use cases also include some systems that increase risks for consumers. For instance, states like Delhi and Tamil Nadu have adopted FRT for marking attendance in schools to increase efficiency, despite privacy risks, the technology’s limitations in accurately identifying children, and so on. While Part 2 mentions that various risks could be addressed with existing laws, it does not provide a framework to ensure the compliance of these use cases with NITI Aayog’s principles or with existing laws. CET should also study the product liability regime for AI products and services under existing legislative frameworks, such as the Consumer Protection Act, 2019, the Indian Penal Code, 1860, and the Bureau of Indian Standards Act, 1986, to ensure the protection of consumers and prevent AI harms.

Distributed yet meaningful efforts for AI governance

Until regulatory mechanisms are implemented, Part 2 has proposed distinct roles for industry, government, and academia to seed the foundations of responsible AI. These include extending support for technical and cross-disciplinary research, along with multidisciplinary stakeholder engagement between industry and government under CET. This collaborative approach provides the government regulator with the apparatus to address the difficult questions that have remained unanswered and to implement proposals that balance innovation with proportionate responses to risk and an equitable distribution of benefits. For instance, the Ministry of Civil Aviation has adopted an incremental approach to regulating Beyond Visual Line of Sight (BVLoS) deliveries using drones under the Drone Rules, 2021, by being mindful of industry feedback and conducting experiments to build a nuanced understanding of the limitations and risks of the technology.

However, this approach has to be supported by well-defined membership criteria so that stakeholders can contribute meaningfully to CET and its working groups. In 2018, MeitY constituted four committees to propose policy frameworks on a) leveraging AI for identifying national missions in key sectors, b) platform and data for AI, c) mapping key technological capabilities like skilling, and d) cybersecurity, safety, legal and ethical issues. These committees did not have any participation from legal experts or social scientists. It is essential to ensure their participation in such discussions, as numerous definitions of values like “fairness” exist, and approaches adopted in the quantitative domain often fail to adequately acknowledge the nuances of social structures and their power dynamics. Diligent multidisciplinary collaboration will ensure that value trade-offs are cautiously evaluated in the interest of public safety.

NITI Aayog’s optimistic and ambitious approach for AI regulation is a creditable one. The cautious combination of self-regulation and government-led regulation provides ample opportunity for the industry to step up and circumvent overregulation by developing mechanisms that respond to the principles and contextual requirements.

The use of AI in financial services, hiring, health, education, social media platforms, and law enforcement will be under strict scrutiny all across the globe in 2022. NITI Aayog’s optimistic and ambitious approach for AI regulation is a creditable one. The cautious combination of self-regulation and government-led regulation provides ample opportunity for the industry to step up and circumvent overregulation by developing mechanisms that respond to the principles and contextual requirements. In any case, MeitY will have to work with stakeholders to design frameworks and assign responsibilities that ensure streamlined compliance with the principles, and to develop grievance redressal mechanisms for citizens.

The views expressed above belong to the author(s).