- Aug 25 2017
If the purpose of privacy is to preserve democracy, then data protection laws must reflect this purpose.
As we celebrate the Supreme Court’s unanimous verdict on privacy being a fundamental right, it is worth exploring the changing technological landscape within which this right will play out.
While a constitutional right to privacy will undoubtedly limit the state's intrusive power over individuals, the full potential of this right will only be realised through data protection laws that govern how private companies collect and use data.
Today, the commodification of data and the manipulative technologies deployed by companies are exacerbating the challenges to privacy. Existing information privacy laws, however, are ill-suited to this challenge. The judgment should reinvigorate a conversation on how today's data protection models need to be reconsidered in light of the changing dynamics of information privacy.
Data is a commodity
Since the widespread adoption of the internet, data has come to represent the single most valuable commodity, as evidenced by the fact that some of the world's most valuable companies have built their enterprises around the collection and use of data. The Supreme Court acknowledges this reality, finding that today's world is characterised by "ubiquitous dataveillance".
Initially, this data was used to provide personalised advertisements and improve services. Today, powerful platforms such as Facebook and Google can use this data to shape our understanding of the world and influence our choices.
Having recognised this risk, the court calls upon the state to enact a stringent data protection law, taking into account the principles of consent, notice and purpose limitation enumerated in the A.P. Shah Committee report.
However, critics argue that this model of information privacy self-management will only allow data to proliferate further, because personal information is treated as a commodity, capable of being sold and exchanged. In an 'internet of things' world, social media, tailored products, innovation, national security and a long list of imperatives for which private information must be exchanged will continuously require this barter, rendering any control meaningless.
Once a sufficient number of people have independently ‘consented’ to exchange personal information in return for services and efficiency, the resulting social structure can no longer sustain a culture of privacy, even for those who withhold consent.
Eventually, this information will provide the fuel for algorithms and artificial intelligence which will increasingly mediate the world around us.
These algorithms are dynamic, seamless and unobtrusive, distinct from an overwhelming ‘big brother’ presence, making it increasingly difficult to identify what individuals are consenting to. Similarly, scholars are coming to terms with the fact that the nature of big data itself renders purpose limitation — as it is understood today — meaningless, with the benefits accruing only through interconnected and analysable data sets.
As early as 2000, Harvard professor Lawrence Lessig warned that "Left to itself… cyberspace will become a perfect tool of control." The day is not far when our entire lives are governed covertly by the logic and dicta of powerful and incomprehensible algorithms.
Technology is manipulative
Two interrelated trends are making this possible. The first pertains to the objective of personalisation. Eric Schmidt once claimed that "The goal is to enable Google users to be able to ask the question such as 'What shall I do tomorrow?' and 'What job shall I take?'" Today that goal is within reach: Google can determine which news you are more likely to read, Amazon can direct you to goods and services you might already want, and Facebook can tell who your closest friends are.
This is the bargain current privacy models allow us to make: to trade personal information in real time for tailored news, music, books, restaurant recommendations and a host of other services. As long as this personal information is not used to harm individuals in their interactions with other people or the state, they are satisfied with this exchange.
Second, business models are increasingly shifting from monetising data to structuring behaviour, intervening to modify it for profit. The force multiplier behind this trend is the proliferation of an 'internet of things' ecosystem which acts as the 'eyes and ears' for tech companies and governments.
Recent studies have confirmed that uninterpretable algorithmic processes, coupled with vast access to personal information, are increasingly making it possible for technologies to coordinate users towards specific goals, which are invariably motivated by profit.
For example, in 2014, Facebook manipulated the news feed of close to 7,00,000 users to test if exposure to different emotions led people to alter their own posting behaviour, claiming that it was only attempting "to improve our services and to make the content people see on Facebook as relevant and engaging as possible." Similarly, Uber has employed a vast array of psychological tools to entice its drivers to drive longer — such as alerting the driver to the next potential customer even before the first ride is completed.
Former Google ethicist Tristan Harris frames the risk succinctly: "If you control the menu, you control the choices." And if you control the choices, you control behaviour.
A social order is based on shared social realities — hyper-personalisation and behavioural manipulation alter these realities, often with adverse political consequences. One manner in which this has already become a reality is evident from the level of political polarisation witnessed on social media. Echo chambers and filter bubbles are primarily a result of algorithms tailoring news and opinions they predict individuals will share or view.
Similarly, Eli Pariser cites an example of two individuals searching for the term 'Egypt' — one individual was shown results relating to the Arab Spring, while the other was shown vacation options. Clearly then, when we allow algorithms to tailor our lives, specific to our needs and biases, we lose out on what traditional public forums offered — a mix of randomness and unpredictability, which stimulates independent thought and allows individuals to mature socially and politically.
Surveillance is the new normal
What we are witnessing then is the manipulation of decision making, the very sphere that privacy seeks to protect. At best, this manipulation might keep you on Facebook longer or might nudge you to view a certain Netflix show. More worryingly, insurance companies might change premiums in real time depending on what you eat every day or how fast you drive, invariably exercising subtle control over individual behaviour and habits in pursuit of a corporate agenda and not a health and safety one.
Most importantly however, this is a threat to democracy. Individuals do not simply make consumer choices, they also make political ones. For example, Donald Trump’s 2016 campaign employed Cambridge Analytica, a firm which proudly advertises that it “uses data to change audience behavior.” This power allowed Trump to tailor political messages to individuals, feeding into their cognitive biases about specific issues and limiting how they were informed.
Consequently, surveillance and control are no longer the preserve of an Orwellian state. What we are witnessing is the rise of a distinct, seemingly democratic, form of surveillance where individuals willingly exchange data and allow themselves to be subjected to monitoring and algorithmic processes in exchange for the benefits of digital tools and services — a trend some refer to as ‘surveillance capitalism’.
Recognising these risks, British economist Kenneth Boulding once said, “A world of unseen dictatorship is conceivable, still using the forms of democratic government.”
Therefore, the construction of social preferences, values and desires is increasingly dictated not by public opinion or personal awareness, but by opaque programmes. The algorithmic analysis of the vast troves of data which individuals willingly provide is systematically designed to mould their choice environment and to alter an individual’s understanding of the world.
In this way, technology increasingly becomes a form of innocuous social control. As Spiros Simitis presciently observed in the 1980s, "processing [data] increasingly appears as the ideal means to adapt an individual to a predetermined, standardised behavior that aims at the highest possible degree of compliance with the model patient, consumer, taxpayer, employee, or citizen."
Privacy is a means to preserve democracy
Expounding on the nature of privacy, the court found that it "postulates the reservation of a private space for the individual, described as the right to be let alone. The concept is founded on the autonomy of the individual. The ability of an individual to make choices lies at the core of the human personality."
Today's data protection laws recognise this only in a circular way. If the end goal of information privacy laws is to give individuals more control over their personal information, and every day they 'consent' to trading this information away, have they truly preserved either their personal information or their space to make autonomous decisions?
If the purpose of privacy is to preserve democracy, then data protection laws must reflect this purpose. The Supreme Court has done well to lay down the groundwork, stating that privacy must “enable individuals to preserve their beliefs, thoughts, expressions, ideas, ideologies, preferences and choices against societal demands of homogeneity.”
Considering the pervasive influence of technology and the adverse implications for democracy, it is clear that the principles of consent, notice and purpose limitation need to be reframed in order to take into account, and limit, the ability of companies and governments to influence our norms, preferences, behaviour and choices.
This commentary originally appeared in The Wire.