Expert Speak Digital Frontiers
Published on Sep 09, 2019
India needs to bring an algorithm transparency bill to combat bias

Imagine the following disconcerting scenarios. A Muslim man applies for a personal loan but is repeatedly denied by the lending company despite his solid economic background. A Dalit consistently gets a longer sentence than a non-Dalit co-conspirator for the same crime. A poor neighborhood keeps finding its young men wrongly booked for crimes. A woman's resume keeps getting ignored on a job search website despite her better qualifications than fellow male applicants. Housing applications from a particular community, or from people of a certain sexual orientation, are never approved by a housing society dominated by a different community.

Unfortunately, such scenarios already play out in India today, at least occasionally. Religious, ethnic, gender, and sexual discrimination of this kind may become even more common in the future if the increasing use of computer technology to make decisions in lending, law enforcement, recruitment, and the judiciary is not accompanied by appropriate safeguards against algorithmic bias.

The key computer technology used for the above decisions, machine learning, typically takes past data as input to construct computer models, which are then used to make new decisions. Pre-existing human biases may creep in at several stages: framing of the problem, selection and preparation of input data, tuning of model parameters and weights, and interpretation of model outputs. Intentionally or unintentionally, this makes the decision-making algorithms biased. In a notorious example, St George's Hospital Medical School in Britain denied admission to women and to men with "foreign-sounding names" because its computer program replicated historical admission trends that were biased against women and applicants of non-European origin. In another notorious example, Amazon had to shut down an online system for screening job applications when it was found that the system repeatedly deprioritized women, since historical data suggested that men were hired more often. In the Indian context, we must worry that the many biases in our society, often reflected in both public and private policies and decisions, will shape the decisions made by computer models. This is especially true given the lack of religious, ethnic, gender, and sexual diversity in the positions that are responsible for, or directly influence, the design, implementation, and real-world deployment of these models.
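To make the mechanism concrete, the following is a minimal sketch, on entirely synthetic data and not modeled on any real company's system, of how a classifier trained on historically biased hiring decisions learns to penalize the disadvantaged group even when qualifications are identical:

```python
# Illustrative sketch: a model trained on biased historical hiring data
# reproduces that bias. All data below is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(0, 1, n)        # true qualification of each applicant
group = rng.integers(0, 2, n)      # 0 = majority group, 1 = minority group

# Historical decisions depended on skill AND on group membership:
# the minority group was systematically penalized in past hiring.
hired = (skill - 1.0 * group + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Two equally skilled applicants who differ only in group membership
print(model.predict_proba([[1.0, 0], [1.0, 1]])[:, 1])
# The minority applicant receives a markedly lower "hire" probability,
# purely because the training labels encoded past discrimination.
```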

In addition to absorbing human biases, computer models may also become biased due to incomplete or unrepresentative data. A criminal database of facial images dominated by a certain gender, set of facial features, headgear, or skin color has a high likelihood of producing false matches for people who share those attributes. Similarly, misclassifications may be common for religious, ethnic, or caste groups that were not adequately represented in the input data. These limitations create biases against groups that were photographed or targeted for surveillance more frequently than others, or that are over-represented in existing databases.
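One standard way to surface such skew is to evaluate error rates separately for each group rather than reporting a single overall accuracy. The sketch below is illustrative only; the arrays are made up and stand in for a hypothetical face matcher's output:

```python
# A minimal sketch of a disaggregated audit: compare false-match rates
# across demographic groups instead of one overall accuracy number.
import numpy as np

def false_match_rate(y_true, y_pred, group, g):
    """False positives among true non-matches, for one demographic group."""
    mask = (group == g) & (y_true == 0)   # true non-matches in group g
    return (y_pred[mask] == 1).mean() if mask.any() else float("nan")

# Synthetic example: group 1 is under-represented in the training data,
# so this (hypothetical) matcher produces more false matches for it.
y_true = np.array([0, 0, 0, 0, 0, 0, 1, 1])
y_pred = np.array([0, 0, 0, 1, 1, 1, 1, 1])
group  = np.array([0, 0, 0, 1, 1, 1, 0, 1])

for g in (0, 1):
    rate = false_match_rate(y_true, y_pred, group, g)
    print(f"group {g}: false-match rate = {rate:.2f}")
```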

Computer models not only replicate human biases; they also propagate and magnify them. As one example, consider the current discriminatory laws against same-sex couples. Computer models today must abide by existing laws and therefore replicate the societal bias. However, these models may remain in use, and continue propagating the bias, even after society makes the laws more equitable. The continued use may be due to oversight, lack of transparency (because of trade secrets, for example), or the complexity of the application the model is embedded in. Moreover, the same model may be used in several applications (reuse is quite common in the software industry), magnifying the impact of a bias. Perhaps worst of all, computer models can make decisions at a scale humans cannot, amplifying the biases. Consider mass surveillance: a computer model systemically biased against a religious group may have far more devastating consequences than a few bigoted individuals.

The problem is exacerbated by algorithmic authority: people put too much trust in computer output. This makes it easy, especially for the malicious and the mischievous, to propagate and replicate intentional bias. It also discourages the victims of bias from seeking redressal.

Another challenge is that ethical frameworks around machine learning and artificial intelligence have yet to be finalized, let alone legislated. This makes it unclear what the right tradeoffs are between fairness and accuracy during the design and use of computer models; economic incentives push model designers and implementers to lean towards accuracy.

So, what must we do? The draft Personal Data Protection Bill, 2018, proposed by the Srikrishna Committee, provides rights to access and confirm personal data. However, it does not require computer model decisions to be explainable. The SPDI Rules do not cover algorithmic bias either. India needs to bring an algorithm transparency bill with the following broad contours. Algorithms and data must be externally audited for bias and made available for public scrutiny whenever possible. Workplaces must be made more diverse to detect and prevent blind spots. Cognitive-bias training must be required. Regulations must be relaxed to allow the use of sensitive data to detect and alleviate bias. Effort should be made to enhance algorithm literacy among users. Research on algorithmic techniques for reducing human bias in models should be encouraged. Models must be re-evaluated, and tuned if needed, when applied in a new social context (Maharashtra versus Bihar, for example). Europe now prohibits solely automated decisions where there could be a legal or similarly significant impact on the individual; where such decisions are permitted, a right to human intervention and a non-binding right to explanation exist. Policy makers, industry, and civil society must debate whether an equivalent framework is appropriate for India. At the least, a minimum level of human involvement should be required in the design and evaluation of a computer model. The Algorithmic Accountability Act introduced in the US Congress in April 2019 and the algorithmic transparency bill passed by the New York City Council in 2017 are other model documents; however, they are much less detailed and prescriptive than the European guidelines.
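As one concrete illustration of a check an external bias audit might run, the sketch below computes a "disparate impact" ratio using the four-fifths rule from US employment practice; the data and the 0.8 threshold are assumptions for illustration, not anything the proposals above mandate:

```python
# Illustrative sketch of one possible audit check: the "four-fifths"
# (80%) disparate-impact test. Data and threshold are assumed here.
import numpy as np

def disparate_impact(decisions, group, protected, reference):
    """Ratio of favorable-outcome rates: protected vs reference group."""
    rate = lambda g: decisions[group == g].mean()
    return rate(protected) / rate(reference)

decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])  # 1 = loan approved
group     = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])  # 0 = reference group

ratio = disparate_impact(decisions, group, protected=1, reference=0)
print(f"disparate impact ratio = {ratio:.2f}")
if ratio < 0.8:
    print("below the four-fifths threshold: flag for human review")
```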

In addition, existing laws that prevent different types of discrimination must be amended to clarify how they apply in the digital space. The clarification will define the guardrails that are triggered when a computer model leads directly to legally recognized discrimination. At the same time, safe harbors should also be defined so that computer models can be developed and deployed with some legal certainty.

Computer models are already being used for law enforcement in India. Maharashtra and Delhi are following predictive policing practices, and several other states are expected to follow suit. States such as Rajasthan, Punjab, and Uttarakhand are using facial recognition software to match faces against digitized criminal records. The National Crime Records Bureau and the Ministry of Civil Aviation also have specific plans to use facial recognition. Similarly, Indian fintech startups are already providing instant loans based on computer models, and Indian startups are using computer models to sort resumes and streamline recruitment. Computer models are not currently used for judicial decision-making in India; however, such models are already popular in several countries, and it is only a matter of time before they start getting used here. It is time for the government to step in and regulate these models so that they do not deepen divisions in society or increase discrimination.

The views expressed above belong to the author(s).

Contributor

Rakesh Kumar

Rakesh Kumar is an associate professor in the electrical and computer engineering department at the University of Illinois Urbana-Champaign. His research is in computer systems energy efficiency and ...
