On Thursday, May 28, US President Donald Trump signed an executive order calling for the narrowing of the legal immunity enjoyed by online platforms like Twitter, Facebook and YouTube with respect to liability arising from user-generated content. Describing such platforms as 21st-century public squares and key enablers of free and open debate, the order expressed concern over several instances of “selective censorship” by “politically biased” corporations, and instructed US regulators to take action, including by proposing federal legislation to clarify the scope of immunity and to protect free expression online.
This is a significant development in global Internet jurisprudence: it is expected to reanimate key questions surrounding platform accountability, and its implications for the digital world warrant closer examination.
Safe harbour protection for online platforms
At the heart of the debate lies Section 230 of the Communications Decency Act of 1996, which broadly protects online platforms from being held liable for content posted by their users. The provision also shields platforms from civil liability for restricting access, in good faith, to content they consider offensive, whether or not that content is constitutionally protected. In other words, Section 230 allows American platform providers to offer their services without having to answer for illegal content posted by users, while leaving them responsible for developing internal mechanisms to keep their platforms free of offensive material within the boundaries of US law.
The “safe harbour” protection offered by Section 230 is believed to have played a crucial role in allowing the Internet to become a forum for open discourse over the years. Without it, platforms could not have operated at scale: vetting every piece of user content would have been highly impractical, and mass censorship the only alternative, stunting their growth and proliferation. Section 230 addressed these concerns by treating platforms as mere intermediaries between content creators and consumers, while permitting them limited content moderation functions on their own terms.
This approach gained favour amongst regulators in other jurisdictions as well, prompting many of them to adopt similar legislation to allow online platforms to flourish. Articles 12 to 15 of the European e-Commerce Directive, for instance, shield platform providers from liability for content they transmit or host and exempt them from any general obligation to monitor that content or verify its legality. In India, Section 79 of the Information Technology Act, 2000 exempts platform providers from liability for third-party content, so long as certain baseline conditions are satisfied.
However, the moderation role that many platforms increasingly play is now becoming the bone of contention in the US and in other geographies.
The winds of change in platform accountability
Though strong safe harbour protection for online platforms was considered good practice in Internet regulation for many years, we are now witnessing a gradual shift in this narrative, driven largely by the evolving roles platforms play in contemporary societies. Gone are the days when social media websites were used primarily by small clusters of individuals to keep abreast of one another’s lives. They have matured into powerful, omnipresent vehicles of public debate that are as valuable to governments and institutions as they are to individuals, businesses and communities. Billions of users now turn to these platforms each day to engage with the world around them in all manner of ways, from sharing news to participating in democratic processes. It is no stretch, then, to say that the norms and frameworks platform providers use to shape conversations on their networks also shape larger public discourses. And this creates a new dynamic.
Cognisant of the enhanced relevance of online platforms in modern society, governments around the world have begun to seek greater control over the form and substance of digital information flows, including by holding platform providers to higher standards of accountability for the content they host. China and Russia sit at one end of the spectrum, both having introduced numerous laws and regulations over the years that impose the strictest of legal sanctions on platforms that allow national and public interests to be undermined in any way. India’s draft amendments to its intermediary liability laws, while more modest, demonstrate a clear intent to tighten platform accountability through expedited content take-downs and proactive monitoring of networks, among other things. Calls for elevated accountability can be heard even in Western liberal democracies, with the European Union exploring legislative intervention to tackle issues such as online disinformation.
Seen against this backdrop, President Trump’s executive order to clarify the scope of legal immunity under Section 230, whatever its underlying motivations, is just the latest in a series of global moves towards stricter accountability frameworks for online platforms.
Implications of the executive order
Despite its scathing indictment of selective censorship by online platforms and its unambiguous call for narrower immunity provisions, President Trump’s order is unlikely to significantly change American law in the short term and will almost certainly fail to withstand judicial scrutiny. As the US Chamber of Commerce pointed out, an executive order cannot be deployed as a vehicle to change federal law, which means the order is, at best, a very vocal declaration of intent by the Trump administration. It is not an insignificant one, however: if nothing else, it will lend vigour to similar propositions elsewhere in the world by virtue of originating in a jurisdiction widely seen as a legislative trendsetter.
Furthermore, the order raises several key questions that will now feature more prominently in global policy circles. Should private corporations be allowed to dictate the terms of engagement in public spheres? At what point do the actions of corporations become active interference with governance and political functions? Should statements made by world leaders and politicians remain accessible to the public, no matter how factually incorrect or otherwise objectionable they might be? How can nation states prevent domestic discourses from being influenced by external values and considerations? How would the flagging of a political statement by a ruling-party or opposition leader on Twitter in the midst of a heated election go down in any given country? Add to this the fact that in all but one or two countries, these platforms are foreign corporations moderating local political content.
What transpires in the Trump versus Jack Dorsey saga over the US election cycle will be as interesting as what other jurisdictions do with their existing efforts to tame the “platform nation” that resides within national boundaries but speaks free of territorial encumbrances.
The views expressed above belong to the author(s).