Author : Sauradeep Bag

Expert Speak | Digital Frontiers
Published on Mar 16, 2024

A comprehensive and conscientious approach to ensure the ethical and considerate mainstream integration of AI technologies is imperative

AI and data: A tale of necessity and constraint

Data plays a foundational and indispensable role in artificial intelligence (AI) systems. It serves as the lifeblood that fuels machine learning algorithms, enabling them to learn patterns, make predictions, and generate insights. High-quality, diverse, and large-scale datasets are crucial for training AI models effectively. Through exposure to vast amounts of data, AI systems can identify correlations, extract features, and refine their decision-making processes. However, the collection of this data is often contentious, particularly when it is used to improve public service delivery.
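The dependence on data described above can be illustrated with a deliberately minimal sketch. The example below is a hypothetical toy, not any production system: a nearest-centroid classifier "learns" a pattern simply by averaging labelled examples, so the volume and quality of those examples directly shape its predictions.

```python
# Minimal illustration (hypothetical data): a nearest-centroid classifier
# "learns" a pattern -- the average of each class -- directly from labelled
# examples. More, and cleaner, training data yields better centroids.

def train(examples):
    """examples: list of (feature_vector, label). Returns label -> centroid."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest (squared Euclidean)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Hypothetical 2-D data: two small clusters of labelled points.
data = [([1.0, 1.0], "low"), ([1.2, 0.8], "low"),
        ([5.0, 5.0], "high"), ([4.8, 5.2], "high")]
model = train(data)
print(predict(model, [1.1, 0.9]))  # a point near the "low" cluster
```

With only four points the centroids are crude; feeding the same procedure more diverse examples refines them, which is the sense in which data is the "lifeblood" of such systems.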

The societal impact of AI presents numerous challenges, and building trust requires frameworks, guidelines, and mechanisms to address them. Understanding and preventing data misuse is critical. Nations like China have integrated AI into governance, and others are following suit. Given AI's rapid development, strict guidelines are imperative. Concerns range from dystopian scenarios in which AI is fused with China's social credit system to the prospect of private companies with comparable data access doing the same. Action is necessary, and a clear, united, global pathway focused on data privacy and integrity is crucial.

Global trend

While much attention is rightly focused on the use of AI in authoritarian regimes, a recent report indicates broad adoption of and experimentation with AI technologies worldwide. Countries with authoritarian regimes and limited political freedoms are investing significantly in AI surveillance technologies. Governments in regions such as the Gulf, East Asia, and South Asia are acquiring advanced analytic systems, facial recognition cameras, and sophisticated monitoring capabilities. However, even liberal democracies in Europe are swiftly adopting automated border controls, predictive policing, safe cities, and facial recognition systems. The Global Expansion of AI Surveillance Index indicates that 51 percent of advanced democracies employ AI surveillance systems, compared with 37 percent of closed autocratic states, 41 percent of electoral autocratic states, and 41 percent of electoral democracies.

India is actively involved in such initiatives, reportedly utilising AI technologies for facial recognition and smart city projects, drawing from both American and Chinese sources. Notably, Ahmedabad has become the first Indian city to employ AI for extensive surveillance and monitoring conducted by municipal corporations and the police throughout the city. A pivotal starting point for further analysis is to recognise China's significant role in this field.

AI Dragon

China leads in AI development, currently making significant strides and drawing considerable attention in the field. Chinese companies have made rapid advancements in AI research and commercialisation. This progress can be attributed to the growth of the entrepreneurial class, which has driven the development of large technology platforms in China over the past two decades, as well as the highly competitive business environment. International collaboration, including corporate investments, partnerships, and scholarly research, has also played a significant role.

Additionally, companies driving AI applications in China have benefited from targeted state support for investment and research. Despite sometimes setting overly ambitious targets, the Chinese government's approach has effectively mobilised resources towards specific industries. This environment has fostered a nimble commercial ecosystem, leading to increased innovation among Chinese AI firms.

However, a potentially alarming issue arises from China's extensive surveillance infrastructure and large population, which give Chinese firms access to substantial amounts of data held by government agencies and gathered through surveillance cameras and smart-city systems. Combined with China's social credit system, this could have hazardous and authoritarian consequences. Initially focused on financial creditworthiness, much like Western credit scores, the system has expanded to cover regulatory compliance and legal violations. Its ultimate goal is a unified, real-time monitored record of individuals, businesses, and government bodies.

The significance of data in driving AI advancement and innovation is undeniable, yet it raises ethical questions. In countries like China, citizens may lack control over data usage. However, the concept of enhancing public services through more efficient AI models is divisive. It's essential to approach this issue through a lens of fairness, necessitating critical evaluation of such systems irrespective of their country of origin.

New project, old concerns 

Privacy concerns have been on the rise in China, particularly with the implementation of two major laws, the Personal Information Protection Law and the Data Security Law, both of which took effect in 2021. These laws give China one of the world's most stringent data governance frameworks. Even before the widespread use of AI technology, Chinese public security organisations were proficient at tracking criminal suspects and regime dissidents, and facial recognition systems have further enhanced their surveillance capabilities. There is strong evidence that some of these systems target individuals of Uyghur background in Xinjiang. In response, the United States has imposed sanctions on several Chinese companies that develop facial recognition technology and supply it to public security organs; several prominent Chinese AI firms are now on the Department of Commerce's Bureau of Industry and Security Entity List.

However, it is not only China that is exploring the convergence of AI and surveillance. The United Kingdom (UK), for example, recently experimented in this area: AI surveillance software combining machine learning with live CCTV footage was used on the London Underground to monitor passengers for criminal or unsafe behaviour. Transport for London (TfL) trialled 11 algorithms at Willesden Green Tube station from October 2022 to September 2023, its first use of AI with live video feeds to send real-time alerts to staff. The trial generated more than 44,000 alerts, of which 19,000 were delivered to station staff.

During the trial at Willesden Green, which had 25,000 daily visitors before the COVID-19 pandemic, the AI system aimed to detect safety incidents to aid those in need. It also targeted criminal and antisocial behaviour, using AI models to identify wheelchairs, prams, vaping, unauthorised access, and individuals endangering themselves near train platforms.

Keeping AI in check 

Countries like China and the UK, along with many others worldwide, are integrating AI into various sectors, including governance, military applications, and pharmaceuticals. This trend is a natural progression as emerging technologies expand at an unprecedented rate, and their adoption seems imminent. However, it is crucial for key stakeholders, including governments, the private sector, and international regulatory bodies, to establish best practices and guidelines. These efforts are necessary to protect data privacy and integrity, as there are concerns about the rise of what some term “digital authoritarianism”.

The risks are evident, underscoring the need for oversight and regulation, a concern recognised globally. Significant strides have already been taken to tackle these challenges. The European Union's General Data Protection Regulation (GDPR) stands out as the most comprehensive privacy regulation globally. It covers data protection and privacy for individuals in the EU and the European Economic Area, granting substantial rights to data subjects. The GDPR imposes stringent obligations on data controllers and processors, mandating the implementation of data protection principles and adherence to strict standards in personal data handling.

Continued access to data is crucial for developing AI. Machine learning systems require ongoing training and retraining to ensure accurate results. However, changes in governance or regulations can lead to data no longer being available, impacting AI performance and eroding trust in the system. Standardised practices and regulations are needed to address these challenges and ensure the reliability of AI systems in the face of data access issues.

Principled innovation

AI uses machine learning algorithms to analyse data, make autonomous decisions, and adapt to new information with minimal human intervention. It is pervasive across sectors such as healthcare, fashion, finance, and agriculture, and it poses privacy challenges. Privacy must be a top priority, as the UK trial mentioned earlier highlights. Experts reviewing the trial documents have raised concerns about the accuracy of the object detection algorithms and the lack of clarity over how aware the public was of the trial. They also warn that such surveillance systems could expand to incorporate more sophisticated detection methods or facial recognition software. Researchers from the Ada Lovelace Institute stress that while the trial did not include facial recognition, using AI in public spaces to analyse behaviours and infer protected characteristics raises similar scientific, ethical, legal, and societal questions as facial recognition technologies.

It's essential to consider mistakes and malfunctions in AI systems. For example, during the UK trial, AI errors included flagging children following their parents through ticket barriers as potential fare dodgers and misidentifying a folding bike as a non-folding one. Additionally, police officers participated in the trial by displaying a machete and a gun in view of CCTV cameras while the station was closed, aiming to improve the system's weapon detection capabilities.

The proliferation of data-driven technologies, particularly AI, has heightened public apprehension about how data is collected and used. In an era where data is a foundational asset for AI models, concerns have arisen over the trade-off between data utility and individual privacy: accumulating extensive datasets benefits AI development, but it also erodes personal privacy boundaries. Governments must therefore strike a delicate balance, leveraging data to optimise public service delivery while upholding individual privacy rights. Techniques such as anonymisation and data isolation are essential strategies for mitigating privacy risks as AI integration accelerates. Addressing these challenges demands a comprehensive and conscientious approach to ensure the ethical and considerate mainstream integration of AI technologies.
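As a loose illustration of the anonymisation techniques mentioned above, the sketch below (with hypothetical field names, not a complete privacy solution) replaces direct identifiers with salted one-way hashes and coarsens overly precise attributes before a record is shared for analytics:

```python
# Simplified sketch (illustrative only): pseudonymising a record before it is
# shared for analytics. Direct identifiers are replaced with salted hashes and
# precise fields are coarsened. Real-world anonymisation requires much more,
# e.g. k-anonymity, differential privacy, and governance controls.
import hashlib

SALT = b"keep-this-secret-and-rotate-it"  # hypothetical secret salt

def pseudonymise(value: str) -> str:
    """One-way salted hash of a direct identifier (linkable, not readable)."""
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()[:16]

def anonymise_record(record: dict) -> dict:
    """Strip or transform personal fields; keep aggregate-useful ones."""
    return {
        "rider_id": pseudonymise(record["name"]),      # replaces the name
        "age_band": f"{(record['age'] // 10) * 10}s",  # coarsen exact age
        "station": record["station"],                  # contextual, kept as-is
    }

record = {"name": "A. Passenger", "age": 34, "station": "Willesden Green"}
print(anonymise_record(record))
```

The design choice here is pseudonymisation rather than deletion: the salted hash lets analysts count repeat visits without ever seeing a name, while coarsening the age reduces re-identification risk at a small cost in precision.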


Sauradeep Bag is an Associate Fellow at the Centre for Security, Strategy, and Technology at the Observer Research Foundation.

The views expressed above belong to the author(s).
