This article is a part of the essay series: “The Freedom to Know: International Day for Universal Access to Information 2024”
Over the past decade, algorithms have become the invisible vehicles through which most online activities are carried out, from the operation of search engines to the functioning of social media platforms. Algorithms are also permeating the public sector, where they are used in urban planning, public resource allocation, and the processing of immigration applications, among other applications. This development has been accompanied by digitally distributed information contributing to socio-political friction and, in extreme cases, to the eruption of violence. The Indian government, along with governments around the world, has realised the need to regulate online platforms due to their impact on socio-political processes. Central to the regulatory scramble is the issue of algorithms and automated decision-making systems that deliver information to millions of screens globally. Individuals are traditionally seen as the primary decision-makers regarding the information they consume. However, algorithms and automated recommender systems now play a crucial, and almost invisible, role in distributing the information available to users according to opaque standards. Individuals using online platforms are largely unaware of such systems or unable to decipher how they function. Consequently, users exercise only limited control over the content presented to them. This “control asymmetry” is fostering mistrust towards tech companies and, by extension, public institutions. To address this opacity and enable proper regulatory measures, it is crucial to first define what algorithmic transparency entails.
What is algorithmic transparency?
Algorithmic transparency occurs when algorithmic processes are visible to and interpretable by users. Visibility allows users to observe information about the internal processes of a system that they might not be aware of in daily interactions. Interpretability, in turn, affords transparency when it gives non-experts insight into how the system works and how it arrives at its outcomes. Algorithmic decision-making systems work concurrently with recommender systems to distribute personalised content to users. It can even be argued that this algorithmic logic reduces “individual users to a unique market” by “pursuing the logic of market segmentation.” As it stands, users are often unaware that they are segmented into demographic subsets, that they are presented with only a small subset of the total available content, or of how they can alter the algorithm to prevent unwanted and unintended consequences.
The lack of algorithmic transparency can lead to the emergence of “folk theories” that explain algorithmic bias and general functioning as intentional outcomes of collusion between bad actors, whether state or non-state entities. This opacity erodes public confidence in the tech sector and in public institutions as they try to take meaningful regulatory measures. Various regulatory efforts have attempted to hold online media platforms accountable over the years. Meta (then Facebook) was pressured into commissioning a Human Rights Impact Assessment (HRIA), published in 2018, on the company’s role in the genocide in Myanmar. In 2020, Meta released another audit of its policies on topics such as electoral misinformation and medical misinformation during the pandemic. Another HRIA was commissioned in 2020 on Meta’s role in the proliferation of hate speech and incitement to violence in India. More recently, the US government has been attempting to ban the Chinese platform TikTok over allegations of algorithmic bias, manipulation, and surveillance. In August 2024, the Supreme Federal Court of Brazil blocked X (formerly Twitter) from providing services in the country over the spread of misinformation on the platform. In the same month, the founder of the messaging app Telegram was arrested in France and charged in connection with alleged criminal activity on the platform. Although the consequences of measures ranging from national bans to arrests remain unclear, it is apparent that algorithmic transparency has become an issue of significant import globally.
Possible solutions
The problem of regulating online media platforms is complex, particularly in countries like India and the US, which are not only two of the biggest markets for digital content but also have constitutionally protected free speech rights. Rather than top-down content moderation, a useful way to regulate algorithmic decision-making without government overreach may be a bottom-up approach that mandates external audits alongside standards for algorithmic transparency and explainability. Explanations of algorithms are broadly divided into “white-box” and “black-box” descriptions. White-box descriptions explain how a system processes and scores inputs to arrive at specific outcomes, showing how it employs its reasoning and data sources to produce recommendations and thus shedding light on the outcomes the system is optimised towards. Black-box descriptions, by contrast, offer justifications for the outcomes a system produces and evaluate its motivations without explaining how the system actually arrives at them.
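To make the distinction concrete, the short Python sketch below contrasts the two kinds of description for a toy content-ranking function. The signal names and weights are invented for illustration and do not describe any platform’s actual system: the white-box view exposes how each signal is weighted and contributes to the score, while the black-box view reports only the outcome with a generic justification.

```python
# Illustrative sketch only: the signal names and weights below are invented
# and do not describe any real platform's system.

# A toy ranking model: a weighted sum of engagement signals for a single post.
WEIGHTS = {"likes_from_friends": 0.5, "topic_match": 0.3, "recency": 0.2}

def white_box_explanation(signals: dict) -> dict:
    """Expose how each signal is weighted and how much it contributes to the score."""
    contributions = {name: WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS}
    return {"score": round(sum(contributions.values()), 3),
            "per_signal_contribution": contributions}

def black_box_explanation(signals: dict) -> str:
    """Justify the outcome without revealing how it was computed."""
    return "Recommended because it resembles content you engaged with recently."

signals = {"likes_from_friends": 0.8, "topic_match": 0.6, "recency": 0.9}
print(white_box_explanation(signals))  # weights and per-signal contributions are visible
print(black_box_explanation(signals))  # only a post-hoc justification is visible
```

The point of the contrast is that only the first kind of output lets an outside auditor or user see which objectives the system is optimised towards.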
An illustrative example of a white-box explanation is the comprehensive guide to its algorithms that Meta published in June 2023, explaining how various mechanisms on its platforms distribute content. The guide describes how the systems gather inventory, leverage and process signals, and rank content, as well as how users can adjust the algorithm to their needs. The company also opened its internal data to external researchers and stakeholders to audit and test its algorithms. An example of a black-box description, on the other hand, is X’s release of part of its source code to the public in March 2023, accompanied by a high-level explanation of how its recommender system prioritises certain user actions to filter and funnel content. However, experts noted that the company provided no insight into how the larger AI models behind the recommender system work, and that it chose to explain only a small sample of 1,500 tweets out of many millions. Furthermore, the company restricted external researchers from accessing its algorithms while downsizing its internal accountability teams. To achieve desired outcomes in the short to medium term, regulators and policymakers will need to prioritise mandating that online media platforms release white-box descriptions.
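As a rough illustration of the kind of pipeline such a guide describes (inventory, signals, ranking, and user controls), the sketch below strings those stages together for a handful of fictional posts. The post data, the “predicted engagement” field, and the muted-topics control are assumptions made for readability, not any company’s actual code.

```python
# Hypothetical pipeline in the spirit of "inventory -> signals -> ranking -> user controls".
# All posts, fields, and scoring rules are fictional and purely illustrative.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: int
    topic: str
    predicted_engagement: float  # stand-in for a learned relevance signal

def gather_inventory() -> list:
    """Collect the candidate posts eligible to appear in a user's feed."""
    return [Post(1, "sports", 0.7), Post(2, "politics", 0.9), Post(3, "cooking", 0.4)]

def rank(posts: list, muted_topics: set) -> list:
    """Score candidates by the engagement signal while honouring user controls."""
    visible = [p for p in posts if p.topic not in muted_topics]  # user-adjustable filter
    return sorted(visible, key=lambda p: p.predicted_engagement, reverse=True)

# A user who has asked to see less political content:
feed = rank(gather_inventory(), muted_topics={"politics"})
print([p.post_id for p in feed])  # -> [1, 3]
```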
Another method of regulating algorithmic decision-making is robust external audits. The previously mentioned 2018 and 2020 audit reports by Meta were criticised as inconsequential because they did not commit to meaningful policy changes. Meta’s 2020 HRIA in India also met with criticism and was described as an attempt at “whitewashing” the audit report. Internal audit reports are often diluted because the credibility of the auditors, the objective of the audit, the methodology for enforcing policy changes, and the metrics for measuring successes and failures do not adhere to any predetermined or unanimously accepted standards. External audits will therefore be necessary to ensure algorithmic transparency, as they will more reliably “signal trustworthiness and compliance to external audiences.” It has also been suggested that the auditing process can employ intermediaries affiliated with neither the audited company nor the government. Independent intermediaries can act as a buffer against government overreach while being able to access sensitive data provided by online media platforms. This bottom-up approach of mandating white-box explanations for algorithms, along with robust audits, may be useful in the short to medium term while policymakers and civil society groups explore more comprehensive and long-term methods of regulating algorithmic overreach.
Going forward
The invisibility of algorithms in the digital sphere often makes them appear neutral. However, an algorithmic decision-making system is arguably closer to a “cognitively and socially reconstructed artefact” optimised for specific outcomes based on conscious and subconscious values. Such values can and must be evaluated to ensure they align with societal expectations. Several steps can be taken to facilitate this alignment. First, policymakers, regulators, civil rights groups, and industry members should collaborate to develop standards and frameworks for auditing algorithms. Second, governments and the private sector can collaborate to establish independent intermediary agencies that can securely access data used by online media platforms while preventing government overreach. Third, companies that employ algorithmic decision-making systems can be categorised as low-impact or high-impact. Regulators may ask developers of low-impact systems (like online streaming services) to provide high-level descriptions of their algorithms supplemented with internal audits. Developers of high-impact systems, such as those used in the public sector and on large social media platforms, should be mandated to provide comprehensive white-box descriptions of their algorithms and must be subject to external audits based on predetermined standards. Fourth, regulators should require social media platforms to include tools in their user interfaces that explain how user actions are scored and used to distribute information.
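The fourth recommendation could take the form of a simple in-interface disclosure. The sketch below, with invented action names and weights, suggests the kind of user-facing summary such a tool might produce: a plain-language account of which of the user’s own actions contributed most to a piece of content being shown.

```python
# Hypothetical "why am I seeing this?" tool; the action names and weights are invented.
ACTION_WEIGHTS = {
    "followed_page": 0.40,
    "liked_similar_posts": 0.35,
    "watched_related_video": 0.25,
}

def explain_to_user(user_actions: dict) -> str:
    """Turn a user's logged actions into a plain-language account of a recommendation."""
    contributions = {action: ACTION_WEIGHTS[action] * count
                     for action, count in user_actions.items() if action in ACTION_WEIGHTS}
    top_action = max(contributions, key=contributions.get)
    return (f"You are seeing this mainly because you {top_action.replace('_', ' ')} "
            f"(contribution: {contributions[top_action]:.2f}).")

print(explain_to_user({"followed_page": 1, "liked_similar_posts": 3}))
```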
Siddharth Yadav is a PhD scholar with a background in history, literature and cultural studies.
The views expressed above belong to the author(s).