Auditing AI: What is it and why does it matter for India?

Published on Jan 30, 2024

To ensure that AI's benefits are realised and its risks addressed, algorithmic auditing can help analyse how these systems operate and, in the process, mitigate wider societal harms.

This essay is part of the series: AI F4: Facts, Fiction, Fears and Fantasies.


With increased access to Artificial Intelligence (AI) development tools and datasets, businesses, nonprofits, and government entities in India are deploying AI systems at an unprecedented pace, often impacting millions of users. Less than a year after its inauguration, the Indian government's facial recognition system, DigiYatra, has been used by over 1.74 million passengers to board flights. From conversational AI chatbots such as Haptik to ML-driven content generation platforms like ShareChat, India's AI startups are growing fast and have individually impacted over half a billion users. Amid this widespread deployment, however, come valid concerns about the propensity of such algorithmic systems to replicate, reinforce, or amplify harmful existing social biases. To ensure that AI's benefits are realised and its risks addressed, algorithmic auditing can play a crucial role in analysing how these systems operate and whether they function as intended, and, in the process, mitigate potentially wider societal harms. India's diverse population has unequal access to digital services, which can produce biased datasets. Given that India's public sector already relies on algorithmic decision-making to boost efficiency, forthcoming legislation must recognise the importance of auditing AI systems.


What is algorithmic auditing? 

Unlike financial audits, which are well-established, professionalised, and regulated along clearly defined parameters, AI algorithmic audits lack a consensus definition. They are generally seen, however, as a way to explicitly present evidence of how AI deployments fall short of performance claims. Auditing an algorithm involves testing it in different environments to understand its functioning and assessing it against predefined normative standards, such as fairness, transparency, and interpretability. Such audits can be first-party (conducted by internal teams within companies), second-party (conducted by contractors), or third-party (conducted by independent researchers or entities with no contractual relationship to the audit target).
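To make the idea concrete, the sketch below shows one narrow slice of such an assessment: computing per-group selection rates for a system's decisions and checking them against a predefined fairness standard. The data, group labels, and the four-fifths (0.8) threshold are illustrative assumptions, not a prescribed methodology.

```python
import pandas as pd

# Hypothetical audit log: one row per decision made by the system under
# audit, with the outcome and the demographic group it applied to.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   1,   0,   0,   0],
})

# Selection rate per group: the share of positive outcomes.
rates = decisions.groupby("group")["approved"].mean()

# Demographic-parity ratio: the worst-off group's rate relative to the
# best-off group's. The 0.8 threshold mirrors the "four-fifths rule" from
# US employment practice; a real audit would use whatever standard its
# framework defines.
parity = rates.min() / rates.max()
print(rates)
print(f"parity ratio = {parity:.2f} -> "
      f"{'within' if parity >= 0.8 else 'below'} the 0.8 threshold")
```

An actual audit would repeat such checks across many metrics, environments, and subgroups; the point is that "fairness" only becomes testable once it is pinned to an explicit, agreed-upon measurement.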

Why does it matter?

Algorithmic systems can propagate racism, classism, sexism, ableism, and other forms of discrimination that cause real-world harm. Top-performing facial recognition systems have misidentified darker-skinned women at rates five to 10 times higher than white men. Apple's credit card algorithms, used to determine the creditworthiness of applicants, have systematically offered female customers credit lines nearly 20 times lower than those offered to men. Researchers who tested 13 publicly available natural language processing models found significant implicit bias against people with disabilities across all of them. Such biases tend to perpetuate existing stereotypes and arise for various reasons, including a lack of diversity in training data, developer biases, and improper metrics.
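Disparities of this kind are precisely what an audit is designed to surface. As a hedged illustration, with invented data and assumed column names, an auditor with access to labelled test outcomes could compute group-wise false positive rates as follows; a several-fold gap between groups would be flagged for investigation.

```python
import pandas as pd

# Invented face-matching test results: ground truth vs. model output,
# broken down by demographic group. All values are illustrative.
results = pd.DataFrame({
    "group":     ["group_1"] * 4 + ["group_2"] * 4,
    "is_match":  [0, 0, 0, 1,  0, 0, 0, 1],   # ground truth
    "predicted": [1, 1, 0, 1,  0, 0, 0, 1],   # model output
})

# False-positive rate per group: wrongly declared matches among true
# non-matches, the error behind wrongful identifications.
fpr = (results[results["is_match"] == 0]
       .groupby("group")["predicted"]
       .mean())
print(fpr)   # a several-fold gap between groups is an audit red flag
```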


Biased algorithms, especially when applied to a country as diverse as India, present a problem that needs solving. With a population of 1.4 billion, India generates vast amounts of data every day, notionally providing excellent training material for AI models. However, less than 50 percent of Indians are internet users, and approximately 33 percent use social media. Nor is internet access evenly distributed: it varies across gender, caste, region, and the rural-urban divide. A poll of internet users undertaken by Google found an under-representation of Muslim and Dalit populations across collected datasets due to their lack of internet use, raising the likelihood of future algorithms producing biased results in the Indian context.
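One simple check an audit could apply here is to compare each group's share of a training dataset against its share of the population the system is meant to serve. The group names and figures below are placeholders, not real statistics; actual baselines would come from census or national survey data.

```python
# Hypothetical representativeness check (all figures invented).
population_share = {"group_x": 0.14, "group_y": 0.17, "group_z": 0.69}
dataset_share    = {"group_x": 0.05, "group_y": 0.08, "group_z": 0.87}

for group, expected in population_share.items():
    observed = dataset_share[group]
    ratio = observed / expected
    status = "UNDER-REPRESENTED" if ratio < 0.8 else "ok"
    print(f"{group}: {observed:.0%} of data vs {expected:.0%} of population "
          f"(ratio {ratio:.2f}) -> {status}")
```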

An increasing use of AI algorithms in India's public sector further complicates the situation. India's resource constraints demand that governance be as efficient as possible, which is one reason processes are being automated using AI/ML. India has one of the world's lowest police-to-population ratios, and over a dozen state law enforcement agencies are currently using facial recognition algorithms to identify criminals. However, faulty AI models would see governance mechanisms used sub-optimally or even counterproductively, in some cases causing more problems than they solve. The Telangana government, for instance, used ML systems to predict people's behaviour while processing thousands of welfare scheme applications. This ultimately led to the cancellation of 100,000 ration cards flagged as fraudulent, 14,000 of which subsequently had to be reinstated because they had been wrongly cancelled.
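The scale of that error is worth spelling out: if 14,000 of the roughly 100,000 cancelled cards had to be reinstated, about 14 percent, or roughly one in seven, of the cancellations were wrong, as the trivial calculation below shows.

```python
# Error rate among cancelled ration cards (figures from the Telangana case).
cancelled = 100_000
reinstated = 14_000

error_rate = reinstated / cancelled
print(f"{error_rate:.0%} of cancellations were overturned "
      f"(about 1 in {round(cancelled / reinstated)})")   # 14%, ~1 in 7
```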

What is being done?

While audits are an important mechanism for public sector accountability, they have not been applied to the use of algorithmic systems in India. To date, discussions around the topic remain largely ad hoc exercises conducted under the wider ambit of certain government agencies. Most recently, the Comptroller and Auditor General (CAG) of India, as part of the SAI20 Engagement Group Summit under the G20, called for initiatives to develop auditing frameworks and granular checklists on AI. But this discussion appears to be at a nascent stage, focused on existing country case studies. And while the mandate covers both auditing AI algorithms and using AI as an auditing tool, all of India's examples fall in the latter category.


Nevertheless, the importance of audits has already been noted across key government agencies in India. Apart from the CAG, NITI Aayog's 2021 approach document for responsible AI also emphasised the need to establish mechanisms for periodic algorithmic audits by independent and accredited auditors. Even the DigiYatra project reportedly has provisions for audits and assessments by independent teams and certain government agencies, though it is unclear how these have been implemented.

What is the way forward?

There is a long history of auditing as an accountability mechanism, but limited precedent for auditing AI use cases. Consequently, auditors face few strategic starting points, a potentially steep learning curve, concerns about biases within auditing teams themselves, and a lack of access to necessary data. The problem is compounded by the emerging, general-purpose nature of AI technology, with uncertain definitions and wide variance among AI systems and solutions. The AI regulatory ecosystem, moreover, has few widely adopted standards: in a survey of individuals and organisations engaged in AI auditing, fewer than 1 percent described current regulation related to AI audits across geographies as “sufficient.”

Without clear audit practices, standards, and regulatory guidance, any assertion that an AI product has been audited, whether by first-, second-, or third-party auditors, will be difficult to verify and is likely to aggravate, rather than mitigate, harm and bias. To address this gap, the Indian government should establish legislation that requires operators and vendors of AI systems to undergo independent algorithmic audits against clearly defined standards. Instead of letting owners of AI products choose whether, when, and how to conduct audits, policymakers can enact requirements for them to submit audits, and develop compliance mechanisms to ensure that audits lead to real change.


Additionally, the Indian government can mandate disclosure of key components of audit findings for peer review; first- and second-party auditors often withhold these, citing client confidentiality concerns. The mandated degree of disclosure (e.g., all details versus only key findings) can be calibrated to domain-specific considerations. Disclosed information can be made public or logged in a database accessible only to vetted actors on request. Furthermore, lawmakers can initiate a standardised harm incident reporting and response mechanism, enabling quantitative, and not only structural or qualitative, consideration of real-world harm in the audit process; a minimal sketch of what such a report could contain follows below. Finally, to grow and diversify the talent pool of capable auditors, the Indian government can formalise the evaluation and accreditation of algorithmic auditors. This should be done without turning accreditation into a ‘rubber stamp’ process and without locking out independent researchers, investigative journalists, or other civil society organisations with the experience (and motivation) to expose harmful AI actors. These measures could be included in the proposed Digital India Act and coordinated with provisions for greater algorithmic accountability in the Digital Personal Data Protection Act, 2023, or the Information Technology Rules, 2021.
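What a standardised harm incident report should contain is an open design question. As a purely illustrative sketch, with field names that are assumptions rather than a proposed standard, a minimal machine-readable record might look like the following, allowing incidents to be aggregated and quantified across systems and sectors.

```python
from dataclasses import dataclass, asdict
from datetime import date
import json

# An illustrative, minimal schema for a standardised harm incident report.
# Field names and categories are assumptions for this sketch, not a standard.
@dataclass
class HarmIncident:
    system_name: str        # the AI system involved
    operator: str           # the entity deploying it
    sector: str             # e.g., "welfare", "law enforcement"
    harm_category: str      # e.g., "wrongful denial of service"
    affected_count: int     # estimated number of people affected
    date_reported: str      # ISO-8601 date
    description: str
    remediation: str = "pending"

report = HarmIncident(
    system_name="example-eligibility-model",      # hypothetical system
    operator="Example State Department",
    sector="welfare",
    harm_category="wrongful denial of service",
    affected_count=14_000,
    date_reported=date.today().isoformat(),
    description="Eligible ration cards cancelled after automated fraud flagging.",
)

# Serialise for submission to a central, queryable incident database.
print(json.dumps(asdict(report), indent=2))
```

Keeping the record small and structured is the point: quantitative analysis of harm only becomes possible once reports are comparable across operators and sectors.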

On their part, corporations or government entities that own and publicly operate AI products can establish best practices in their internal processes, such as involving the stakeholders most likely to be harmed by AI systems in the audit process, or notifying individuals when they are subject to algorithmic decision-making systems.


There is significant consensus amongst practitioners in the algorithmic auditing ecosystem around these recommendations, and progress on them will enable India's regulatory entities and businesses to play a significant role in reducing harm. The audit process can be slow, boring, methodical, and meticulous, at odds with the current rapid pace of AI development. But it might prove more beneficial to slow down as algorithms are deployed across increasingly high-stakes sectors.


Husanjot Chahal is a Research Analyst at Georgetown University’s Center for Security and Emerging Technology (CSET). 

Samanvya Hooda is a Defense Analysis Research Assistant at the Center for Security and Emerging Technology (CSET).

The views expressed above belong to the author(s).
