Expert Speak Young Voices
Published on Jun 16, 2023
India should focus on retooling its approach to AI to improve the lives of citizens without infringing on fundamental rights
Too big to fail: Shortcomings of large-scale AI

Artificial Intelligence (AI) has been the big buzzword of the past few years, and given the speed at which new AI capabilities are being developed, the trend is not ending anytime soon. For governments, AI presents an opportunity to optimise existing processes and improve the lives of millions. By collecting vast amounts of information about their population, known as big data, some governments believe they can create AI networks capable of fixing many of society’s problems. After hopes that India’s Aadhaar registry could provide a starting point for a wide-reaching database of Indian citizens were dashed by the Supreme Court’s 2018 ruling on permitted uses for Aadhaar, states like Telangana have attempted to skirt these rules by collecting data without relying on Aadhaar. Instead of looking for loopholes to continue the pursuit of big data, India should focus on retooling its approach to AI by identifying and addressing specific situations where AI can be useful without compromising citizens’ fundamental rights.

Privacy precedents 

In 2017, the Indian Supreme Court ruled in Justice K.S. Puttaswamy v Union of India that privacy is a fundamental right under Article 21 of the Indian Constitution, which guarantees the protection of life and personal liberty. The case was an amalgamation of complaints about the Aadhaar programme, most alleging that it infringed on that right. In its 2018 follow-up judgment, the court upheld Aadhaar as constitutional, but only on the condition that certain protections be built into its operation, including restrictions on the types of demographic data that can be collected during Aadhaar enrolment and limitations on its integration with other systems, to prevent its use as a vehicle for government surveillance. With this decision, the foundations of Aadhaar were permitted, but officials’ hopes of broader uses for Aadhaar were put on hold indefinitely. Although the case was a direct response to the Aadhaar programme, it has become a landmark that reinforces Indians’ right to privacy, with ramifications well beyond Aadhaar.

Testing the limits in Telangana

To engage in high-level data collection without violating the legal use restrictions on Aadhaar, Telangana’s Samagra Vedika programme used AI to combine information across pre-existing databases into a unique profile for each individual. By cross-checking similar names and addresses, officials built profiles that included information about an individual’s utilities, property ownership, and history of receiving welfare benefits. The government then used this information to determine eligibility for future welfare schemes, leading to the cancellation of 100,000 ration cards. Public outcry eventually led to the reinstatement of 14,000 cancelled ration cards. This episode raised questions about the ethics of big data and the accuracy of the Samagra Vedika model.

Although Samagra Vedika adheres to the letter of the law in Justice K.S. Puttaswamy v Union of India by avoiding the use of Aadhaar specifically, it violates the spirit of the judgement set forth by the Supreme Court regarding constitutional privacy. The Telangana government has counter-argued that the Samagra Vedika programme does not violate citizens’ privacy because it relies on data that individual government departments have already collected for their typical operations. However, in the Supreme Court order, the justices specifically state that the integration of separate datasets, referred to as “silos”, is a cause for concern in and of itself: “When Aadhaar is seeded into every database, it becomes a bridge across discreet data silos, which allows anyone with access to this information to re-construct a profile of an individual’s life. This contradicts the right to privacy and poses severe threats due to potential surveillance.” Beyond any individual dataset, it is this centralisation that violates citizens’ right to privacy, by giving the state a disproportionate view of their daily activities.
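The kind of record linkage described above can be sketched in a few lines: fuzzy-matching names and addresses across two departmental databases to merge records into one profile. This is an illustrative toy, not the actual Samagra Vedika implementation; the data, field names, and similarity threshold are all invented. It also shows why such linkage is error-prone — two different people with similar names and addresses can be fused into a single profile, the kind of mistake that can wrongly cancel a ration card.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Crude string similarity in [0, 1], ignoring case."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def link_records(ration_db, utility_db, threshold=0.85):
    """Merge records from two department databases whose name and
    address are 'similar enough' into combined citizen profiles."""
    profiles = []
    for r in ration_db:
        for u in utility_db:
            score = (similarity(r["name"], u["name"]) +
                     similarity(r["address"], u["address"])) / 2
            if score >= threshold:
                # Fuse both records into one profile -- with no
                # guarantee they describe the same person.
                profiles.append({**r, **u, "match_score": round(score, 2)})
    return profiles

# Hypothetical records held by two separate departments.
ration_db = [{"name": "A. Kumar", "address": "12 MG Road, Hyderabad"}]
utility_db = [{"name": "A Kumar", "address": "12 M G Road Hyderabad",
               "power_units": 410}]
```

A threshold loose enough to catch spelling variants of the same person is also loose enough to conflate neighbours with similar names — which is the accuracy concern raised against the programme.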
Although the Supreme Court case was judged on the specific merits of Aadhaar, the underlying reasoning holds regardless of whether a project uses Aadhaar or some other method of data collection. In this context, any attempt at an all-encompassing AI model for governance must be reconsidered because of its inherent constitutional violations and intrinsic risk of misuse for state surveillance of citizens.
This does not mean that AI for governance should be wholly discarded. Instead, India must look to examples where specific issues have been solved with limited-use AI. Compared to the broader approach outlined above, limited-use AI allows governments to harness emerging technologies for good without the need to monitor every aspect of their citizens’ lives.

Smaller AI, bigger impact

Education is one such area where limited-use AI could solve pressing issues. Using AI to improve Early Warning Systems (EWS) for students at risk of dropping out has proved effective in both developed and developing countries. These systems collect data about students’ academic performance, attendance, and family circumstances to identify at-risk students and direct administrators to provide targeted support. Although EWS originated in the United States, where dropout rates are relatively low, work in countries with high dropout rates has also seen promising results. Researchers have adapted predictive models in Guatemala and Honduras to correctly identify 80 percent of students who dropped out between grades six and seven, a major transition year in the Central American school system. Similar efforts are also underway in the Mexican state of Guanajuato. While the Delhi government has implemented a relatively successful rudimentary EWS model, its predictions rely solely on tracking student attendance. Incorporating AI into Indian EWS would allow schools to account for more of the variables that contribute to dropping out and make more nuanced predictions about student behaviour. Once collected, the data used by these models can remain in its “silo”, allowing schools to help students while preserving privacy for the population overall.
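The logic of such a system can be sketched as a simple risk score over a handful of student features. This is a toy illustration, not any government’s actual EWS: the feature names, weights, and threshold are invented, whereas a real system would learn its weights from historical records of students who did and did not drop out. The point is that the model runs entirely on data the school already holds.

```python
import math

# Hypothetical weights: lower attendance and grades raise risk,
# economic hardship raises it further. A real EWS would fit these
# to historical outcomes rather than hard-code them.
WEIGHTS = {"attendance_rate": -4.0, "avg_grade": -2.5, "low_income": 1.5}
BIAS = 3.0

def dropout_risk(student: dict) -> float:
    """Logistic-regression-style dropout risk score in [0, 1]."""
    z = BIAS + sum(w * student[k] for k, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))

def flag_at_risk(students, threshold=0.5):
    """Return students whose predicted risk exceeds the threshold,
    so administrators can target support. The records never need to
    leave the school's own database ("silo")."""
    return [s for s in students if dropout_risk(s) >= threshold]

# Two hypothetical students: one thriving, one struggling.
students = [
    {"name": "A", "attendance_rate": 0.95, "avg_grade": 0.8, "low_income": 0},
    {"name": "B", "attendance_rate": 0.55, "avg_grade": 0.4, "low_income": 1},
]
```

Because the output is a ranked list of students for a school counsellor rather than a centralised citizen profile, the same predictive machinery serves welfare goals without the surveillance risk discussed above.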
Similar examples are apparent across locations and applications. In Singapore, public transportation services have worked with the Japanese firm NEC to predict which bus drivers are likely to be involved in an accident in the next three months and recommend them for additional training sessions. In the US, the Bureau of Labor Statistics is using AI to analyse occupational injury reports, relieving staff of tedious classification work and freeing them for more complex problems. These scenarios are not the flashy, all-knowing AI that people often expect from science fiction movies and punchy news headlines. Instead, these programmes set aside spectacle to target problem areas and provide effective solutions.

Looking ahead

It is tempting to try to make every neural network bigger than the last. AI is a hot topic for governance, and it is easy for officials with good intentions to be overwhelmed by technology they do not fully understand. This temptation is even greater in India, where Aadhaar could theoretically provide an advantageous starting point for widespread data integration. Despite the promises of big AI, the government must carefully weigh any possible benefits against the erosion of Indian citizens’ reasonable expectations of privacy, which are integral to maintaining a healthy democracy. By scaling down instead of scaling up, governments can reap the benefits of AI through proven, targeted methods that improve the lives of citizens without the need for total surveillance.
Jenna Stephenson is an intern with the Geoeconomics Programme at the Observer Research Foundation
The views expressed above belong to the author(s).