When Algorithms Decide, and Regulating Those Decisions
Expert Speak | Digital Frontiers | Published on Oct 19, 2021

How do we ensure that users know what they are consenting to when the trade-off between the short-term gain of using a service and the long-term harm from a loss of privacy may not be immediately obvious?

Contrary to popular belief, the Luddite rebellion of the early 1800s was not simply a fight against progress but a fight for agency. Highly skilled weavers, including women, fought for autonomy and labour rights amidst the fear of their employment and agency being snatched away by machines. The new technology threatened to tip the balance of power in favour of the textile-mill-owning elites who controlled the means of production. Today, those old technology gods have been replaced by newer algorithmic ones. As Artificial Intelligence (AI)-powered machines take control of our decisions, it might already be too late to take back control of what makes us human: free will and autonomy.

Whether as users or as citizens, almost everything we do now leaves a digital fingerprint. In the age of surveillance capitalism, the fight for free will is a losing battle. In a post-Cambridge Analytica world, algorithms challenge the bedrock of individual freedom and choice; preserving human choice will require going beyond mere consent to accountable regulatory mechanisms.

Function creep, monetising behaviours

Human behaviour is not always rational; it is constrained by bounded rationality. Ideally, machines can fill in the gaps, reduce biases, and aid human decision-making. However, Big Tech often monetises the fallacies of human decision-making. Cambridge Analytica was made possible by commoditising citizens' preferences and manipulating consent and decision-making online. The adage "if something is free, you're the product" rings true.

Even though users may be concerned about the privacy of their data, their behaviour online often does not reflect that concern. This is the privacy paradox. It arises partly because the power dynamics of modern-day online decision-making are not in favour of individuals. Take, for instance, how the terms and conditions of online applications are lengthy, jargon-filled documents that make little sense to ordinary users. Such documents are deliberately made hard to read and are meant to give an illusion of choice. Even when they are simplified, users may still be unable to rationally weigh the risks and benefits of privacy decisions. The trade-off between the short-term gain of using a service and the long-term harm from a loss of privacy may not be immediately obvious.


The issue of privacy forces actors to make trade-offs at every level. The state might see it as advantageous to gather information cost-effectively, using structures and algorithms to track indicators of interest. However, the cost of collecting such data rises exponentially if citizens are given the universal ability to control which information is shared. Similarly, providers face a trade-off between ceding and retaining control over consumers' information. In the absence of clear regulation and privacy protection, information control can be weaponised, to the detriment of users' preferences.

Flaw in the machine 

It is also now abundantly clear that algorithms are often wrong. It is no wonder that present-day algorithms are rife with bias and in conflict with the rights of minorities. Essentially, biases that exist offline, such as those of race and gender, get encoded digitally. Timnit Gebru, who was fired from Google, flagged institutional and structural gender bias within algorithms through her research. In one of her papers, she shows how facial recognition software had error rates of more than 34 percent for darker-skinned women, against only 0.8 percent for lighter-skinned men.

Such political realities within technology will only further perpetuate cognitive and behavioural biases. When these technologies are imported and applied in developing countries, it is not difficult to see how they could backfire and shift costs onto the marginalised. A recent study by the behavioural scientist Sendhil Mullainathan showed how hospital AI algorithms are rife with bias against black patients. The study conclusively showed that an algorithm used to assign patients to complex treatments was much more likely to prefer white patients over black patients. Such racial bias potentially affected millions of hospital patients. As the application of algorithms expands, there may be many more such instances.

Digital divides, biases unite 

The Centre for Social and Behaviour Change ran experiments with 10,000 Indian and Kenyan participants to test whether nudges can make consent more informed and help change consumers' behaviour around privacy decisions. We found that strong nudges, which might require regulatory mandates, work better than others. These include star ratings that grade a provider's privacy policy, and cool-down periods that require the user to pause before making a privacy decision on a platform. Regulating default privacy settings also works because, more often than not, users stick to the default. On the other hand, we found that merely giving users information urging them to be more mindful, or simplifying the privacy policy itself, is less effective. Our results pointed to the fact that the free market, or self-regulation by consumers based on their own good judgement, may not protect privacy. This leads us to argue for privacy-protecting regulatory frameworks that can mandate such interventions.


Role of regulations 

As algorithms take over our online choices, accountable and democratic institutions online can help safeguard consumers in the fight for control. Fundamental to the challenge of these new sites of conflict is the need for regulation that can keep pace with advances in technology. The current consent-based system of protection against the dark side of algorithms is rendered ineffective because existing structures leave individual users powerless. Given the scale and complexity involved, it may even mean deploying a good AI, with checks and balances, to weed out bias in the effort to stop a bad AI.

The pathway to resolving the machine-human conflict is to institute adaptive regulatory controls online. Such controls would need not only to factor in behavioural perspectives but also to take a human rights-based approach to increase accountability. Much as it did with the General Data Protection Regulation (GDPR), the European Union (EU) has already begun laying the groundwork for regulating and de-biasing AI. The mechanical ship of Theseus may now be built from AI algorithms instead of weaving machines, but it remains, in essence, a creeping Trojan horse subverting autonomy and choice.

The views expressed above belong to the author(s).

Contributors

Pooja Haldea

Pooja Haldea is an expert in applied behaviour science. She is a Senior Advisor at the Centre for Social and Behaviour Change, Ashoka University and ...

Saksham

Saksham is a researcher in behavioural economics with the Centre for Social and Behaviour Change, with interests in decision-making for privacy and health.