Expert Speak Digital Frontiers
Published on Oct 19, 2019
The internet is not on fire

Critiques of the internet often point to the techno-utopian promises of its early days. The “free-flow” of information and glorious, unrestricted connectedness were supposed to change the world for the better. However, what they have done falls somewhat short of the ideal. It remains prudent to believe that new technology can generally make things better for individuals. Simultaneously, it is foolhardy to believe it will solve all our problems and lead to utopia. It was always likelier that this connectedness would do what every other technological transformation has done: resolve some of the old trade-offs while introducing entirely new ones.

It takes some time to fully understand the cost of progress. Things are not as bleak as framed in the eight trends outlined below. Moreover, some of these are a case of “damned if you do and damned if you don’t”: either choice can and will be criticised by someone. For example, we simultaneously want an internet that promotes free speech where anyone has a voice, yet we protest when people on the fringe express their voices. The nature of democratic dialogue is that any amount of censorship will be criticised by someone, including no censorship at all. Therefore, the question is not whether there should or should not be censorship, but rather how much censorship is too much, or too little.

The internet is, ultimately, no more or no less than any other messy human project built by multiple, uncoordinated actors. We are merely figuring out how to find a new balance between newfound freedoms and age-old responsibilities. This response tries to add nuance to the eight trends as framed by the discussant, while grouping them into four related themes. The bold text in quotations is text from the framing proposition, which this piece responds to.

Theme 1: Data & Network Effects

“Firstly, and most importantly, we have lost control over our data.

Secondly, the centralization of the internet into significantly large businesses with over a billion users each has led to a reduction in the negotiating ability of individual users.”

We have not lost control over our data; we never had any. If anything, the real trend is that we get more control over our data with every passing day. Today, the number of choices we have to make and their complex, intersecting effects leads to bounded rationality. The problem with consent is not philosophical; it is practical. The choices are overwhelming, but we undeniably have more choice and more protections than we did in the early days of the internet.

Privacy policies and clicking for consent are simple tools that worked when the internet was a much smaller place. The number of digital services we interact with today means that no user can be rationally expected to choose options that best protect their privacy. Even if companies are mandated to disclose their data relationships transparently, most people will not read those disclosures. One can argue that harms-based policing is also unlikely to be very helpful. The threat of being sued is not going to stop companies armed to the teeth with lawyers. We saw Facebook announce an anticipated $5Bn FTC fine, and then saw its stock price go up! Regulations such as GDPR only entrench those who can pay for large compliance teams like the Googles of the world, while making life harder for upstarts.

One hopeful trend is that we see the rise of more privacy-oriented data-sharing technologies such as federated learning. The inventor of the web, Tim Berners-Lee, is building personal data-stores with agents that negotiate data sharing with applications. We see cryptography-based decentralised alternatives to centralised systems. If any of these systems go mainstream, it would mean a world of more control over more of our data! I believe that our ability to granularly control our data will only increase, and we are going to have automated or human intermediaries who will negotiate these complex choices for us.

The unionization of workers has been under threat since before the internet. There are also examples of how employees have come together politically in ways that they could not have done as effectively before the internet. Take the example of the Google walkout, where tens of thousands of employees across the world were able to organise a protest that resulted in a change in policies. Though such events are admittedly rare, the broader story does suggest that the fortunes of many internet corporations are built by stepping on the backs of vulnerable workers.

Algorithms do make for lousy bosses. The gig economy works off of two clever arbitrages. It is in fixing these arbitrages that we have the hope of protecting individual rights. The first is a regulatory arbitrage: by positioning themselves as mere marketplaces, these organizations can shirk many responsibilities like benefits and a minimum wage, not afforded to their analogue counterparts. This claim of being “merely a marketplace” is bewildering because they typically control pricing and incentives, unlike a real marketplace like the stock exchange which would allow for both parties to discover a price that works mutually.

The second is a risk capital arbitrage: the private market capital afforded to a “tech startup” to enter traditional markets comes at a massive discount compared to those typically available to public market funded analogue competitors. For example, WeWork’s astronomical valuation for being a rent middleman is unavailable to any other real estate company.

This arbitrage is the golden tap: this is the real fuel that feeds blitzscaling and reduces the negotiating ability of workers in a marketplace model. This golden tap allows these companies to build a honeytrap for their workers with high incentives that disappear once they are sufficiently entrenched. Promises of quick and easy money have always lured humans. False promotion of incentives should be punished as false advertising.

These companies lured users to join by simultaneously selling services cheaply while paying more to their creators. Individual rights are at risk from this completely legal behaviour. This dumping of capital needs regulation from an antitrust lens that breaks away from the Bork school of thought. Is this risk capital arbitrage crowding out competitors that would create a fairer (albeit costlier) choice? Many would claim yes, but the legal tools available do not allow us to control this behaviour.

Theme 2: Data as an Asset of the State

“Fourth, data is being viewed as a factor of production and a national asset, as opposed to an individual right. Governments are profiling citizens.

Sixth, individuals are being attacked and weaponized.”

Data is a factor of production! That does not legitimize profiling citizens or peering into their personal lives. However, think about sufficiently anonymised data sets. Merely counting the number of passengers passing through an airport at all times can help us understand the economic outlook for a region. The GSTN dataset, stripped of all business identifiers while retaining HSN codes, can help policymakers understand what parts of the economy are under distress. To whom does this data belong? Whom does it hurt if released stripped of identifiers?

It is misleading to say that, pitted against statutory power, individuals have no choice but to part with their data. None of the recent developments has changed what data one has to share with the state to avail a service. If one is applying for a loan, the data stored about them in a public credit registry (PCR) is not different from what existing credit bureaus already have. The creation of the PCR makes it affordable to serve the excluded.

The use of unique identifiers like digital national IDs may help consolidate disparate records, but it does not mean that consolidation is impossible without them. Real names, birthdates and other details also lead to a fairly accurate match. Simultaneously, giving the same unique identifier in multiple databases does not automatically imply consolidation. We provide our phone numbers in many, many more databases than we provide our Aadhaar number. The consolidation of large datasets, without the user’s knowledge or consent, should be illegal. The problem is not the choice of the identifier; the problem is the intent of those combining these datasets, and those acquiescing to it.

India has taken a bold step in providing tokenisation by default and virtualisation of the Aadhaar identifier. If one could, the safest thing to do would be to share the tokenised ID from Aadhaar, and not our phone number, which is stored in plain text. Apple, whose primary business model is hardware rather than advertising, has recently added this feature to its repertoire with “Sign in with Apple”. Fundamentally, this comes down to whom the user trusts: Big Tech or democratic institutions.
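The idea behind tokenised identifiers can be sketched in a few lines. This is a toy illustration, not how Aadhaar’s Virtual ID or Sign in with Apple is actually implemented: the keyed-hash construction, the key handling and all the names below are assumptions. The point it demonstrates is that each relying party gets a stable token, yet no two parties receive the same token, so their databases cannot be joined on the identifier without the issuer’s secret.

```python
import hmac
import hashlib

def tokenise(real_id: str, relying_party: str, secret: bytes) -> str:
    """Derive a per-service token from a real identifier.

    The token is stable for one relying party (so it still works as an
    identifier there), but differs across parties, so two databases
    cannot be linked on it without the issuer's secret key.
    """
    msg = f"{relying_party}:{real_id}".encode()
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()[:16]

# The secret stays with the issuer; relying parties only ever see tokens.
secret = b"issuer-side key, never shared"
t_bank = tokenise("1234-5678-9012", "bank.example", secret)
t_telco = tokenise("1234-5678-9012", "telco.example", secret)

assert t_bank != t_telco  # the two databases cannot be joined on the token
assert t_bank == tokenise("1234-5678-9012", "bank.example", secret)  # stable
```

Contrast this with a plain-text phone number, which is identical in every database it is given to and therefore a ready-made join key.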

Theme 3: Internet and Free Speech

“Sixth, individuals are being attacked and weaponized.”

“Thirdly, free speech is highly dependent on a few platforms, and is being restricted by hecklers getting the veto”

“Seven, speech is being restricted via internet shutdowns, content regulation and online mobs.”

In 2017, the Nobel prize in economics went to Richard Thaler for effectively manipulating people into doing things that are better for them in the long run. A well-known example exploited people’s inertia: making retirement savings opt-out rather than opt-in. The internet has long known this trick of manipulating human tendencies, or “bugs” in the “rational actor” model. However, this knowledge is not necessarily always used for good.

There is truth to the fact that the internet allows any motivated individuals or organizations to spread their message far and wide, and precisely to the kind of people who would agree with it. Alternatively, the same tools could be used to target and overwhelm those who would disagree. This is a feature of the internet – it is not a bug.

As described by the framing proposition this is an instance of how people “learn through the free-flow of information and societal interactions at a global scale, without prescriptive local restrictions.” That does not make it automatically acceptable in all scenarios, but it does reinforce a point from the opening argument.

An internet with absolute protection for free speech is not necessarily a great objective to have. It can cause as many harms as it prevents. When audiences are manipulated to promote generally accepted pro-social values, it is considered a nudge. When used to aggressively market to the individual, or promote opposing political views, it is weaponizing the individual. It is hard to tell them apart, especially when those “weaponizing” may be equally earnest in their intention as those “nudging”.

Defining the boundaries of free speech is not a new problem. Platforms such as Facebook adopted real-name policies precisely to moderate and increase the quality of conversation, compared to the vitriol spewed in darker corners of the internet. Censorship is a classic case of “damned if you do, damned if you don’t”. Our real problem is not censorship, but who has the authority to censor and how fair they are.

The government, which has classically held that authority, is now wrestling with platforms to have that power once again. These large companies are usually hostile to government censorship. Where governments cannot participate in that decision, they choose more forceful tools like internet shutdowns. The hopeful trend is to delegate censorship to ombudsmen that we can all agree are diverse, representative and fair. Both Facebook and Google have shown precedent in moving towards such a model, and it is easy to see why.

Theme 4: Use of technology for surveillance

“Fifth, the usage of biometrics and facial recognition systems for authentication is becoming shockingly popular.”

“Eight, surveillance is the new normal.”

In the Indian context, biometrics are shockingly popular because, shockingly, many people still cannot read or write. Ideas like passwords and OTPs are harder than merely showing up and scanning a fingerprint. In and of itself, the use of biometrics for authentication is not bad or evil. Many willingly use fingerprint scanners on smartphones for their convenience. The fingerprints are stored locally and never shared.

What makes biometrics dangerous is their use for identification (one-to-many matching) and the lack of a second (or further) factor. One-to-one matching of biometrics such as fingerprints or a face with a previously recorded image does not compromise privacy by itself. However, if there is potential to use the same data for one-to-many matching, the system will soon become a tool for surveillance. This is a trickier problem, but it is not impossible to manage.
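The distinction between the two modes can be sketched with a toy example. The scalar “templates” below stand in for real biometric feature vectors, and the names and threshold are invented for illustration. The same stored data powers both functions; the difference is purely who initiates the comparison and against how many records.

```python
# Toy template store: in reality these would be biometric feature vectors.
templates = {"alice": 0.82, "bob": 0.35, "carol": 0.58}
THRESHOLD = 0.05

def verify(claimed_id: str, probe: float) -> bool:
    """One-to-one: the user claims an identity, and the probe is compared
    against that single stored template. This is authentication."""
    return abs(templates[claimed_id] - probe) < THRESHOLD

def identify(probe: float) -> list[str]:
    """One-to-many: an unattributed probe is scanned against every template.
    This is what turns the same database into a surveillance tool."""
    return [uid for uid, t in templates.items() if abs(t - probe) < THRESHOLD]

assert verify("alice", 0.80)        # Alice confirms she is Alice
assert identify(0.80) == ["alice"]  # the same data names an unknown probe
```

Nothing in the data itself distinguishes the benign mode from the dangerous one, which is why the constraint has to come from system design and governance rather than from the biometric.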

Most solutions involve trusting an intermediary, and it is at this point that an important question comes up: who do you trust to save your biometrics but not abuse the information? Some prefer their government, some prefer private players, and some prefer not having to share it with anyone at all.

The lack of a second (or more) factor is much easier to control. Anyone who is designing a system that uses biometrics should realise that biometrics can be compromised (such as cloning of fingerprints) and build adequate safeguards and fallbacks. Almost all new biometric scanners have liveness detection and other protections.

We see a more hopeful trend. The quick adoption of biometrics also means that people are now learning about potential ways in which biometrics fail and how they can be secured. In the early days of databases, most applications were vulnerable to SQL injection attacks. This did not mean we needed to eliminate databases; instead, we fixed how that attack could be carried out and educated developers.
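The SQL injection analogy can be made concrete. The sketch below, using Python’s built-in sqlite3 module with an invented table, shows both the flaw and the fix that the industry standardised on: parameterised queries, which treat user input as data rather than as part of the query.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

user_input = "alice' OR '1'='1"  # a classic injection payload

# Vulnerable: string concatenation lets the payload rewrite the query,
# so the WHERE clause becomes a tautology and matches every row.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Fixed: a parameterised query passes the payload as a plain value.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

assert len(vulnerable) == 2  # the injection returned the whole table
assert safe == []            # the literal string matches nobody
```

The fix was not to abandon SQL but to change how queries are constructed and to teach developers the pattern; the argument above is that biometric systems are going through the same maturation.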

Should we phase out fingerprints in favour of something harder to clone, like vein prints? The answer seems like an obvious yes, but the problem is simply affordability. The point of biometrics in ID systems was to provide a low-cost method of authentication. One can argue that technology X (or Y or Z) is safer, but to make authentication useful, one has to make it affordable as well.

The temptation for surveillance will always remain a corrupting force. A government intent on surveillance, with a convincing narrative about a terrifying enemy, backed by a jingoist majority, is incredibly hard to stop. The only known effective technology system to control surveillance is something like Estonia’s X-Road. X-Road allows citizens to see how their records were accessed and by whom. To do this requires, ironically, combining user data with a unique identifier which some argue makes surveillance easier. That is a tricky choice for anyone to make.

This last point dovetails into my final argument: policymakers and corporations face ridiculously hard trade-offs. It is easy for anyone with no skin in the game to criticise any idea in a vacuum and call it a worrying trend, when one does not have to make that decision. The nature of news is that we ignore ongoing, mundane tragedies to worry about future, possible tragedies brought on by new developments. What rarely gets talked about is the cost of doing nothing. The cost of doing nothing in a country like India is that hundreds of millions of people continue to remain poor, uncounted, and outside of the progress narrative. The internet is a powerful tool in empowering those individuals. They deserve the opportunities it creates for them. While we need to be cautious in our progress, let us not rob hundreds of millions of a better life now, because of the extreme apprehensions of a few.

This essay originally appeared in Digital Debates — CyFy Journal 2019.

The views expressed above belong to the author(s).


Tanuj Bhojwani

Tanuj Bhojwani is a Fellow at iSPIRT Foundation. Most recently he helped the government formulate and implement Digital Sky, a real-time low altitude notification and authorization platform ...