Published on Oct 17, 2021
Responsible and ethical AI is premised on an infrastructure of trust; it is imperative that accountability and governance frameworks establish transparency, and checks and balances, respectively
Facial recognition in law enforcement is the litmus test for India’s commitment to “Responsible AI for All”

In 2019, the National Crime Records Bureau (NCRB) issued a public Request for Proposal (RFP) to establish a nationwide Automated Facial Recognition System (AFRS). A legal notice issued to the NCRB, seeking recall and cancellation of this RFP, was only partially heeded: the RFP was recalled, but not cancelled; it was simply replaced in June last year with a revised RFP. One of the most disconcerting aspects of both RFPs has been the arbitrariness with which this action is being pursued. To date, Indian law is devoid of any comprehensive legislation that authorises, regulates, and determines the evidentiary value of automated facial recognition technologies (AFRTs) within our domestic law enforcement processes and the larger criminal justice system. Add to this the fact that, in terms of evidence-based decision-making, the espousal of AFRT is arguably driven by a technocratic belief in better, more efficient systems. However, there are no objective measures of how such efficiency is evaluated, nor of the trade-off in terms of rights, liberties, and due process norms. It is also pertinent to mention that AFRTs are not only being pursued at the national level; several state police forces have already deployed such technologies in some form or are in the process of acquiring them.

At the state level, the processes and their legitimacy are even more questionable. Over the past six months, in an extensive information-gathering exercise under the Right to Information Act, the author raised inquiries about the AFRT programmes of four police forces reportedly deploying such technology: Punjab, Delhi, Hyderabad, and Mumbai. On inquiring how the police decided to use these technologies, or whether there were internal office memos or notifications that shed light on this decision-making process, the author received little by way of response. Delhi Police was the only authority that at least partially furnished some inputs, whereas Mumbai and Punjab were completely non-responsive. This is despite the fact that, for instance, Punjab has reportedly been using its Punjab Artificial Intelligence System (PAIS) for a few years now, without any visible or publicly accessible oversight mechanisms.

Indian law is devoid of any comprehensive legislation that authorises, regulates, and determines the evidentiary value of automated facial recognition technologies (AFRTs) within our domestic law enforcement processes and the larger criminal justice system.

What became evident is the inherent lack of transparency and accountability around how these systems are being designed and deployed. This is despite the fact that notable international researchers and AI ethicists have assailed the accuracy and efficacy of AFRTs as a technological intervention in policing. For India, particularly, the fact that these tech interventions are being integrated into an already complex, and arguably biased, criminal justice system, with little to no scrutiny, is in direct contravention of its ideas of responsible AI. Responsible and ethical AI is premised on an infrastructure of trust, and for such an ecosystem to flourish, it is imperative that accountability and governance frameworks establish transparency, and checks and balances, respectively.

In addition to the opacity, there are constitutional and legal challenges posed by the use of AFRTs in Indian law enforcement. The principles for responsible AI, which were published earlier this year by NITI Aayog, and apparently manifest the government’s commitment to safe and ethical AI usage, categorically adopt the doctrine of “constitutional morality”. This requires any use of AI in India to safeguard, inter alia, the rights and freedoms afforded to people under the Constitution.

However, the use of AFRT raises three patent concerns in this regard. First, it affords the police, and other state agencies, a sophisticated system for targeting certain individuals. Take, for instance, the increasingly reported cases where some form of AFRT is witnessing scope and function creep. In Delhi, the technology, which was being used for locating missing children, was also reportedly used to target anti-CAA protestors at Shaheen Bagh. Similar instances of identifying protestors also emerged from the pro-democracy protests in Hong Kong in 2019 and the Black Lives Matter movement in the US last year. While such deployments may be claimed necessary to thwart nefarious designs compromising public order, they can also be used to suppress dissent. Paired with the aforementioned absence of accountability measures, this danger of stifling dissent and opposition is real and considerable.

Second, there is the underlying privacy concern that looms large in any kind of state surveillance. The recent furore around the reported use of Pegasus clearly indicates the potential for abuse of state-sanctioned surveillance activities against private citizens. With AFRTs, there is an inherent risk of sensitive biometric and personal data of individuals being used to build the underlying algorithms. Add to that the scope for constant surveillance that it creates as a digital ecosystem, and the technology poses a serious impediment to individual privacy. In fact, the dangers to privacy are accentuated by the absence of a governing data protection law, though it should be added that the draft Personal Data Protection Bill, 2019, gave some broad exemptions to the government under the guise of investigatory powers. So even if the proposed legislation is enacted, its actual effectiveness in regulating the use of AFRTs or similar surveillance technologies is highly questionable.

The principles for responsible AI, which were published earlier this year by NITI Aayog, and apparently manifest the government’s commitment to safe and ethical AI usage, categorically adopt the doctrine of “constitutional morality”.

The third and final constitutional challenge posed by the use of AFRTs in law enforcement is the potential vitiation of due process, as guaranteed under Article 21 of the Constitution. Due process has been read as inclusive of procedural safeguards and substantive provisions that counter arbitrariness. However, the current modus operandi with which AFRTs are being integrated into policing and law enforcement, and the opacity around such decisions, exhibit significant red flags from a due process standpoint. The role, usage, and remit of AFRTs are apparently entirely under executive control, with no visible oversight or even informal public scrutiny. To rely on such technologies in criminal investigations, or even in pre-emptive surveillance to thwart more serious dangers to “national security”, requires, at the least, a robust system of checks and balances.

In addition to these issues, there is also the question of liability when AFRTs go awry or misidentify individuals. Practically speaking, once embedded in the policing system, such errors can realistically undermine the liberty and freedom of the state’s citizenry. In fact, such cases are not academic hypotheticals, but have already surfaced in the US, particularly with respect to people of colour. When such an incident happens, who is to be held responsible: the state (or its agencies) deploying such tech, or the developers, who may be held vicariously liable for the technical deficits of their creation? These questions are vital and must be addressed before AFRTs take deeper root in our policing and law enforcement systems, lest we recede into the cliché of the “law playing catch-up to technology”.

As with most other technological endorsements, the increasing use of AFRTs in law enforcement is likely a manifestation of an innate automation bias, which presumes that anything automated is an improvement on its analogue version.

Lastly, as with most other technological endorsements, the increasing use of AFRTs in law enforcement is likely a manifestation of an innate automation bias, which presumes that anything automated is an improvement on its analogue version. However, if the efficacy (and not merely the efficiency) of these systems is to be assessed, the question is not simply one of expeditious processes. Instead, far more thought needs to be put into evaluating the merit of technological integration in an objective, holistic sense, to determine not only its implications for law enforcement agencies, but also its larger integration into our societies. This comprehensive determination will be pivotal in assessing whether this contentious technology actually stands the test of responsible and ethical AI to which India is committing itself.

The views expressed above belong to the author(s).

Contributor

Ameen Jauhar

Ameen Jauhar is a senior resident fellow at the Vidhi Centre for Legal Policy and leads its Centre for Applied Law & Tech Research (ALTR). ...
