Published on Nov 03, 2018
Beyond Intermediary Liability: Platform Responsibility for Harmful Speech in India

Introduction

On 21 August 2018, the CEO of WhatsApp was asked, in a meeting with the Minister for Information Technology of the Government of India, to “comply with Indian laws,” “set up a local office” and ensure that illegal messages were traceable for law-enforcement purposes. WhatsApp acceded to the first two demands but rejected the third, arguing that it would compromise the privacy of its users.<1> The Government of India, however, continues to seek “technical solutions” to WhatsApp’s traceability problem, going so far as to threaten possible legal action against the company.<2> The meeting had followed claims that WhatsApp’s ‘group’ feature, widely popular in India, facilitated the wide dissemination of “fake news” or disinformation.

The exchange between the Government of India and WhatsApp highlights the tensions between the freedoms and responsibilities of social-media platforms in an era of a platform-dependent internet. WhatsApp is far from the only entity facing these concerns. Facebook is also under intense scrutiny in India and elsewhere, following allegations of misuse of personal data by the data analysis firm Cambridge Analytica.<3> Twitter has faced public pressure for greater efforts to counter hate speech. YouTube, Reddit and other major online platforms have faced similar charges and concerns.

Social-media platforms have emerged as a subject of regulatory concern, given the manner in which their technological design and governance can influence critical aspects of online society, including freedom of expression and privacy. The centrality of the role of platforms in the governance of online communities has spurred many governments to push for greater regulation of these entities.<4> While many of these concerns are valid, such platforms are also crucial enablers of online communication and innovation, and poorly designed regulation can threaten the vibrancy and freedom of the online public space. Regulatory efforts to reduce the spread of disinformation, ‘fake news’ or hate speech must therefore be designed keeping in mind the role these platforms play in contemporary society.

The issues of platform liability and responsibility arise in several interconnected spheres, including civil liability for copyright infringement as well as privacy and data protection. This paper, however, focuses on harmful or ‘illegal’ speech and conduct on platforms. It discusses the peculiar position that the new platforms occupy and the concerns they pose in the online public space. Further, it discusses the inadequacies of the legal framework in addressing these concerns and outlines possible regulatory approaches.

How Platforms Govern Speech

While the core feature of the internet has been its decentralised nature—permitting peer-to-peer interaction at the ends of the network—the intermediaries that facilitate this access have always been a necessary part of it.<5> Although the nature of and role played by every intermediary varies, they are central to enabling multiple parties to connect and share information online, instead of providing information or content of their own. Some intermediaries provide the physical infrastructure for network access (e.g. ISPs or hardware manufacturers), while others enable the use of online applications for building networks among the users. The latter form of intermediaries, which has gradually evolved to become a crucial arbiter of online communication at an unprecedented scale, is the focus of this paper.

The term “platform” has gained currency among technologists and policymakers and is increasingly used in discussions of law and policy. Yet, there is no clear definition of the term. According to one author, this semantic confusion is a deliberate attempt by the creators and owners of these technologies to position themselves as undiscriminating carriers of information between different groups of users.<6> While this characterisation may have been somewhat true of earlier applications and networks (proto-social networks such as online bulletin boards or instant-messaging applications with limited membership), the organisation of content has become the primary value addition of these platforms as the breadth of user-generated information and communication has grown.<7> Social-media networks such as Facebook or Twitter, online content platforms such as YouTube, and some messaging platforms such as WhatsApp are now responsible for organising, indexing, placing and moderating data in a manner that enables modern social networking and communication. It is this role of organising and moderating content that places such platforms on a significantly different footing from other intermediaries, since the design of these technologies and their private ordering greatly shapes how online communities are formed and how communication takes place.<8> This governance comprises formal rules, e.g. codes of conduct, community guidelines and terms of service, as well as less transparent rules encoded in the design and software of the platforms.<9> The rules by which platforms govern speech become the default laws of the online world, regulating social, political and cultural values, and in doing so usurp the role traditionally played by law. This is governance by platforms, a task performed not merely incidentally but one that forms the very crux of the value these platforms provide.<10>

The governance of speech and expression is a politically fraught and challenging exercise. For large information platforms, two crucial problems arise. First, speech is heavily context-dependent, as the social and legal rules regarding ‘permissible’ expression vary significantly across countries and communities. Social media blurs the boundaries between communities and, consequently, poses a major challenge to speech regulation. Since speech is embedded in a context shaped by the communities it is produced by and for, ignoring that context has often proved problematic. For example, Facebook’s child-pornography filter took down a historically important photograph of a victim of the Vietnam War,<11> and YouTube has taken down videos of police brutality against protestors in Egypt as “violent content.”<12>

Second, the scale at which these platforms operate makes effective ex-ante or ex-post regulation a huge challenge. Facebook has more than two billion users but an employee base of only around 25,000.<13> Twitter hosts close to 300 million tweets a day.<14> As a Twitter executive has noted, effectively governing even 99.99 percent of tweets would still leave some 30,000 potentially problematic tweets a day that can cause significant harm.<15>

Yet, for all the difficulties in governing online speech, there is a consistent demand for ‘better’ regulation through increased governance by the platforms. There are instrumental as well as inherent reasons for placing this responsibility on platforms. In economic terms, platforms are the least-cost actors for regulating speech, given that they are usually in a position to exercise immediate control over illegal speech, including identifying the speakers, policing (removing or restricting) problematic expression and responding to persons who may be affected by such speech. More importantly, the rules (both formal and encoded) by which these platforms organise and disseminate user-generated content are not neutral, since they are designed to promote the economic end-goals of the platforms. As one scholar notes, the design choices implemented by these platforms cater to an ‘ideal’ user, without taking into account the disproportionate impact of these choices on marginalised users against whom such technology can be abused.<16> Facebook’s unofficial motto of ‘connecting people at all costs’ thus feeds into its reluctance to adequately police the weaponisation of its technology for political or social manipulation, as is evident in the concerns over the Cambridge Analytica scandal. Similarly, Twitter’s approach to online abuse and hate speech prioritises the ‘virality’ of content and the freedom and ease of communication over the protection of vulnerable users who may be victims of abuse.<17>

Modern social-media platforms constitute and shape public discourse and impact political, social and cultural values. These platforms have been given the right to privately order public discourse without the concomitant responsibility of adhering, in an accountable manner, to the norms and values of a democratic society. It is, therefore, important to examine the existing models of governance of platforms and the ways to improve them.

How Platforms are Governed

Intermediaries have been a subject of consideration among the international legal community and governments right from the inception of the internet. Consequently, a number of approaches have been developed for governing their responsibilities and liabilities. The Manila Principles on Intermediary Liability groups these approaches into three broad models:

  1. Expansive protections against liability, as in the US, where Section 230 of the Communications Decency Act provides intermediaries a broad exemption from being treated as publishers of third-party content.
  2. Conditional immunity from liability, as in India and the EU, where immunity is granted on satisfying certain conditions, e.g. taking down content upon notification by a private user or government.
  3. Primary liability for third-party content, as in China, where intermediaries may be held liable for all content hosted by them.<18>

India brought in the Information Technology Act (IT Act) in 2000 with the primary aim of giving effect to the UNCITRAL Model Law on Electronic Commerce. The Act acknowledged the concept and importance of ‘intermediaries’ in the online environment and defined an intermediary as “any person who on behalf of another person receives, stores or transmits that message or provides any service with respect to that message.” Going beyond the model law, the IT Act also provided a limited protection from liability for such intermediaries.<19>

A vague definition of intermediaries and the narrow scope of this protection offered intermediaries little shelter from liability for third-party content hosted by them: the provision reversed the traditional burden of proof, requiring the intermediary to prove that the offence was committed “without his knowledge” or that “he had exercised all due diligence to prevent the commission of such offence.” Additionally, there was no clear definition of what constituted “due diligence.” Finally, the provision only granted immunity for offences under the IT Act and not under the general law (the Indian Penal Code) or other special enactments.

This ambiguity in the law finally came to bear on intermediaries in Avnish Bajaj v. State (NCT of Delhi),<20> in which a third party had posted on Baazee.com a link to pornographic material concerning minors. The director of the website was charged with obscenity under the IPC. Ordinarily, such liability would have been precluded due to the absence of criminal intention (and knowledge), which is a prerequisite for most offences. However, owing to the peculiar interpretation of culpability for the offence of obscenity in India, the case resulted in the prosecution of the director of the marketplace.<21> The Delhi High Court held that Baazee.com had failed to exercise due diligence by not filtering pornographic content and by failing to adopt a policy to prevent the listing, display or sale of such content on the website.<22> The court’s decision clearly showcases the pitfalls of deeming intermediaries to be publishers and making them liable for content. Intermediaries that only host content lack the moral culpability that would justify holding them liable. Moreover, empirical studies show that the desire to avoid liability and minimise compliance costs leads intermediaries to over-censor without regard for user rights.<23> Given the problems of scale and context that plague the governance of speech, imposing direct liability for speech on intermediaries can sound the death knell for the online communities created through such platforms.

Ultimately, legislators decided to reduce the ambiguity surrounding the safe harbours available to intermediaries by amending section 79, as well as the definition of intermediaries, under the IT Act in 2008. The scope of intermediaries under the amended Act was clarified and expanded to include every intermediary that provides any service in respect of an ‘electronic record’. Section 79 was also significantly changed to lay down a general rule against liability for third-party information processed by intermediaries in any manner, subject to the requirements that the intermediary did not initiate the transmission, select the receivers or modify the information. However, intermediaries were to be held liable for the content if they failed to remove it after receiving “actual knowledge” of its illegality. The government also introduced “due diligence guidelines,” which require intermediaries to include specific terms in their contracts with users and allow them to terminate user access for breach of these terms.<24>

Even after the amendment, section 79 continued to impose a heavy burden on intermediaries through the ‘actual knowledge’ requirement, whereby intermediaries were prone to comply with private takedown and censorship requests without any procedural safeguards to protect their users’ freedom of expression. The provisions of the IT Act governing intermediaries were subsequently challenged in the Supreme Court in Shreya Singhal v. Union of India.<25> This was the first instance in which the governance of online speech was constitutionally examined. The Supreme Court upheld intermediary safe harbour under section 79 and strengthened it by requiring a judicial or executive determination of the illegality of content before a platform could be held liable for its non-removal.

As it stands now, intermediary liability for third-party content under Indian law is limited to the post-facto removal of illegal content upon receipt of a government or judicial order specifying that content. This standard, however, is problematic, given the difference between the scale and pace at which online speech (and its harms) propagates and those at which the judicial system operates, and it thus provides insufficient recourse against illegal or harmful speech. As online communication becomes more integral to our politics and culture, it is throwing up issues that require a rethink of the governance of online expression; disinformation on WhatsApp inciting violence is a visceral example of the need for more effective governance. Other examples include online harassment of and violence against women on Twitter, which has continued despite being repeatedly called to attention,<26> and the potential manipulation of Facebook to spread fake news targeting elections, of which the platform’s owners are aware.<27> The present legal liability framework for such platforms does not impose any responsibility on them for the governance of problematic speech, even as it leaves them free to police content through their private ordering mechanisms.

Unfortunately, the response of the judiciary and the government to the inadequacy of the current legal framework has been to create ad-hoc systems of liability and responsibility for a variety of intermediaries. One extreme reaction has been to curtail internet access for entire regions at the slightest provocation. Thus, India has become notorious for having had the most internet shutdowns over the past year.<28>

The Supreme Court has directed search engines to include “auto-blocking” capabilities in certain cases (sex trafficking and pre-natal sex determination), stating (in the context of intermediaries), “It is not acceptable and this needs to be controlled. They (intermediaries) can’t put anything which is against the law of the country.”<29> The Government of India has threatened and cajoled WhatsApp to combat disinformation, with the Ministry of Information Technology noting, “Such platform (sic) cannot evade accountability and responsibility specially when good technological inventions are abused by some miscreants who resort to provocative messages which lead to spread of violence.”<30>

From Intermediary Liability to ‘Platform Responsibility’

The above discussion makes it clear that the private ordering of speech by platforms demands greater regulation than merely framing conditions for their liability for third-party content. Indeed, the discourse surrounding intermediary liability is fast changing into one that demands more from platforms, moving towards what the Internet Governance Forum has termed “platform responsibility.”<31> The concept of ‘intermediary liability’ was developed keeping in mind earlier models of internet commerce and networking, which operated at a limited scale and had a greater capacity for self-regulation.<32> It is increasingly clear that these concepts are not entirely suitable for large platforms whose principal value addition lies in the mechanisms they employ to censor, amplify or otherwise moderate speech, in a manner that greatly impacts the public sphere and online society.

A few regulatory measures have been proposed, and some adopted, to address these concerns. The most prominent perhaps has been Germany’s Network Enforcement Act (the ‘NetzDG’).<33> The law targets ‘manifestly illegal’ content on large social networks with more than two million registered users in Germany: once such content is brought to a platform’s notice, it must be taken down within 24 hours if the platform is to avoid heavy financial penalties.<34> Additionally, the law imposes reporting and transparency obligations on platforms when responding to restriction requests. Other regulators have attempted to impose ‘informal’ obligations to counter illegal speech, such as the European Commission’s “code of conduct,” which has been voluntarily adopted by social-media platforms.<35> In India, while this option remains unexplored, section 79 of the IT Act provides the possibility of holding platforms liable for content if a court finds that their practices amount to “modification” of content. The law, however, is silent on what precisely qualifies as modification.

These regulatory options see increased liability as the primary driver of better enforcement of democratically chosen speech norms. However, they have faced severe and justifiable criticism for enabling greater censorship. More importantly, viewing platform responsibility primarily through a ‘content liability’ framework further deprives users of control over their online interactions and does nothing to address the power of platforms to shape public discourse at scale through the private ordering of their formal and encoded rules. Lowering the threshold for holding platforms liable for content, particularly without concomitant obligations of accountability or transparency, only gives platforms greater incentive to govern speech according to private norms and systems without taking user choices into account.

Such trade-offs between ‘controlled’ or lawful speech, on the one hand, and freedom of expression and privacy, on the other, are likely inevitable when designing regulatory systems for governing online expression, and they ought to be resolved according to the policy choices acceptable to specific communities. However, the lens through which such regulation is approached must be rethought. Instead of targeting illegal speech per se, policy should be designed around the particular harms arising from the concentration of governing power in large platforms and the measures by which platforms can be held accountable for the design and governance of their technologies. These harms may be addressed through measures that dismantle the concentration of power that platforms hold over the online public space.<36> There is ample precedent in competition law for carefully regulating and breaking up concentrations of market power. In some EU countries, the framework of competition law is already being used to address the governance of platforms, as in the German competition regulator’s proceedings against Facebook’s lax data-protection practices.<37> In India, the Competition Act of 2002 provides a basic, principles-based framework to address abuse of market power, but the Competition Commission of India will have to adopt new approaches to assess the concentration of market power and its abuse through or by platforms, and to fashion remedies such as improved interoperability and data-portability standards. Such an approach also expands user choice by permitting product differentiation through content-moderation practices, and can incentivise platforms, for example, to prioritise users at risk of abuse rather than design for an ‘ideal’ user. It also avoids imposing costly regulations designed for large platforms on new entrants, for whom the compliance burden can be anticompetitive.

In addition to leveraging competition law, regulation can also target the processes by which platforms control speech. Such regulation should aim at making platforms more accountable and transparent in how they govern speech: ensuring that processes exist for moderating and responding to content flagged as problematic by users, and for explaining the decisions taken to the affected parties. Transparency can be ensured through process audits and reporting obligations to government agencies on the measures adopted for content moderation. Such audits should explain how a platform’s terms and conditions regarding content moderation have been implemented, as well as the algorithmic rules that determine how and why certain content is promoted or suppressed.

Moreover, community guidelines for content must be context-specific and moulded to particular communities, instead of being, as they currently are, anchored in speech norms formulated in the context of the US and revised reactively only when platforms’ economic interests are affected. These guidelines must take into account user experiences, and the process of their development should be transparent and subject to scrutiny by public authorities. Platforms should be held to clear standards and practices for implementing the above procedures, and must face financial levies and penalties if they fail to do so.

Conclusion

This paper attempts to provide a description of, and justification for, the regulation of platforms that moderate content at a large scale. The starting point for the discussion is the unjustifiable sway that the private ordering practices of platforms have over public discourse, and the lack of responsive legal measures to address specific concerns regarding the governing power that such platforms hold over the online public space.

The present regulatory framework in India, which focuses on classifying specific intermediaries and holding them liable for underlying illegal content upon judicial order, is not feasible for content regulation at scale. The response of creating ad-hoc regulatory frameworks has largely lacked principled justification; it merely increases the concentration of power in the hands of technology platforms and gives the state greater censorship capabilities without legal backing.

A number of regulatory options can be, and have been, explored in response to these concerns. While increased liability for platforms has become a popular response, it is a largely reactive one. The wider problem is the lack of user choice, and the lack of accountability and transparency of large platforms and of the control they hold over the public sphere. This paper suggests that regulatory authorities and lawmakers in India must explore ways to dismantle the imbalance of power between platforms and their users. The ideas discussed here are merely starting points for a much-needed broader discussion on how to regulate the growing ill-effects of platforms. Governments and companies need to be mindful of preserving the diversity and freedom of the online public space without allowing it to be overtaken by private interests.


Endnotes

<1> Komal Gupta, “WhatsApp, govt at loggerheads over traceability of messages,” Livemint, 24 August 2018.

<2> Ibid.

<3> Carole Cadwalladr and Emma Graham-Harrison, “Zuckerberg set up fraudulent scheme to ‘weaponise’ data, court case alleges,” The Guardian, 24 May 2018.

<4> Komal Gupta, “Govt plans regulatory framework for social media, online content: Smriti Irani,” Livemint, 19 March 2018; “I&B Ministry sets up committee to regulate online content,” Hindustan Times, 6 April 2018.

<5> “The Economic and Social Role of Internet Intermediaries,” Organisation for Economic Co-operation and Development, April 2010.

<6> Tarleton Gillespie, “The Politics of ‘Platforms’,” New Media & Society 12, no. 3 (2010): 347–64.

<7> Tarleton Gillespie, “Platforms Are Not Intermediaries,” Georgetown Law and Technology Review 2 (2018): 198.

<8> Ibid.

<9> Frank Pasquale, “Platform Neutrality: Enhancing Freedom of Expression in Spheres of Private Power,” Theoretical Inquiries in Law 17 (2016): 487.

<10> Tarleton Gillespie, “Governance of and by platforms,” in SAGE Handbook of Social Media, eds. Jean Burgess, Thomas Poell and Alice Marwick.

<11> “Facebook Restores Iconic Vietnam War Photo It Censored for Nudity,” The New York Times, 9 September 2016.

<12> Jillian York, “A Brief History of YouTube Censorship,” Motherboard, 26 March 2018.

<13> “Number of active Facebook users increased despite scandals,” Al Jazeera, 26 April 2018.

<14> Jim Edwards, “Leaked Twitter API data shows the number of tweets is in serious decline,” Business Insider.

<15> Dave Harley, “Content Moderation at Scale Conference”.

<16> Cade Diehm, “On Weaponised Design.”

<17> Anupam Chander and Vivek Krishnamurthy, “The Myth of Platform Neutrality,” Georgetown Law and Technology Review 2 (2018): 400.

<18> The Manila Principles on Intermediary Liability Background Paper.

<19> Section 79 of the unamended IT Act stated:

“For the removal of doubts, it is hereby declared that no person providing any service as a network service provider shall be liable under this Act, rules or regulations made thereunder for any third party information or data made available by him if he proves that the offence or contravention was committed without his knowledge or that he had exercised all due diligence to prevent the commission of such offence or contravention.

Explanation

For the purposes of this section—

  (a) “Network Service Provider” means an intermediary;
  (b) “third party information” means any information dealt with by a network service provider in his capacity as an intermediary.”

<20> Avnish Bajaj v. State (NCT of Delhi), 150 (2008) DLT 765.

<21> Chinmayi Arun, “Gatekeeper Liability and Article 19(1)(a) of the Constitution of India,” NUJS Law Review 7 (2014): 73.

<22> Avnish Bajaj v. State (NCT of Delhi), 150 (2008) DLT 765. While the Supreme Court ultimately quashed the proceedings, this was done without reference to section 79 or the perils of intermediary liability for third-party content.

<23> Rishabh Dara, “Intermediary Liability in India: Chilling Effects on Free Expression on the Internet,Centre for Internet and Society, 2011.

<24> Information Technology (Intermediaries Guidelines) Rules, 2011.

<25> Shreya Singhal v. Union of India, (2015) 5 SCC 1.

<26> Mariya Salim, “It's time to address online violence against women in India,” Al Jazeera, 13 May 2018.

<27> Bharti Jain, “Facebook offers to curb fake news during 2019 Lok Sabha poll,” Times of India, 7 July 2018.

<28> “Clampdowns and Courage: South Asia Press Freedom Report 2017–2018,” UNESCO.

<29> Sabu Mathew George v. Union of India & Ors. (WP(C) 341/2008).

<30> “WhatsApp warned for abuse of their platform,” Press Information Bureau, 3 July 2018.

<31> “Dynamic Coalition on Platform Responsibility (DCPR).”

<32> Gillespie, supra note 6.

<33> Act to Improve Enforcement of the Law in Social Networks (Network Enforcement Act), Germany.

<34> Diana Lee, “Germany’s NetzDG and the Threat to Online Free Speech,” Media Freedom and Information Access Clinic, 10 October 2017.

<35>Code of conduct on countering illegal hate speech online,” European Commission.

<36> Gillespie, supra note 6; Cindy Cohn, “Bad Facts Make Bad Law: How Platform Censorship Has Failed So Far and How to Ensure That the Response to Neo-Nazis Doesn’t Make It Worse,” Georgetown Law and Technology Review 2 (2018): 432; Marc Tessier, Judith Herzog and Lofred Madzou, “Regulation at the Age of Online Platform-Based Economy: Accountability, User Empowerment and Responsiveness,” in Platform Regulations: How Platforms are Regulated and How They Regulate Us, UN IGF Dynamic Coalition on Platform Responsibility, December 2017.

<37> Douglas Busvine, “Facebook abused dominant position, says German watchdog,” Reuters, 19 December 2017.

The views expressed above belong to the author(s).

Contributor

Divij Joshi


Divij Joshi is a lawyer and legal researcher presently affiliated with the Vidhi Centre for Legal Policy Karnataka. His primary research focus is on the ...
