Author: Trisha Ray

Published on Jan 14, 2021

The decisions social media platforms take about the content and users they host can affect livelihoods, political processes, and major public policy decisions in jurisdictions far removed from the ones they are legally headquartered in.

Atlas Shrugged: The platform v. the state

After an eventful four years, marked by a prodigious list of typos, nuclear sabre-rattling, racially-charged retweets, and finally, egging on an insurrection, user @realDonaldTrump was permanently suspended from Twitter. There is no denying that outgoing President Trump’s incitement of the Capitol Hill riots crossed a bright red line, and several platforms — including Reddit, Facebook, Instagram, Snapchat, YouTube and TikTok — have cracked down on far-right accounts and hashtags, fearing further violence. Trump’s de-platforming has, however, triggered a fresh round of debate on the power of platforms, including how and when they choose to apply their terms of service.

The immediate question in the wake of an account suspension is how a platform arrives at such a decision.

How do platforms make account suspension decisions? How consistently are their parameters applied?

In a blog post on the decision to suspend @realDonaldTrump, Twitter stated that the rationale was repeated violation of its 'glorification of violence' policy, as well as concerns about the peaceful transition of power at the 20 January presidential inauguration. Facebook CEO Mark Zuckerberg's statement outlined similar concerns.

The Trump ban is notable, as it is the first permanent ban of a sitting world leader,<1> but is in no way emblematic of the average user’s experience.

First, Twitter and Facebook enforce their policies slightly differently when it comes to world leaders, based on a "public interest" exception. Content that violates the Twitter Rules may be left up if there is a public interest in doing so. Facebook has similar exceptions to its Community Standards in the name of public interest and "newsworthiness." What "public interest" entails is somewhat nebulous. The speech of regular users does not, of course, have the same protections. Twitter and Facebook users can appeal a suspension, a process that can take anywhere from 24 hours to a few months, sometimes more, according to Reddit threads of disgruntled users.


Second, the imperatives of these American platforms are heavily driven by American political processes and ethical frameworks; the 2016 and 2020 US elections have heavily informed policy changes by these platforms. They have, for example, engaged in concerted efforts to take down US far-right and left-wing accounts for propagating violence or for perceived platform manipulation, and their suspension of Trump came in response to pressure from their own US employees and users. Are they similarly responsive to other geographies?

The Trump ban was met with mixed reactions around the world. While many applauded the decision, a portion of Twitter users feared the same fate could befall other leaders. The general sentiment in the latter camp was, "What is stopping platforms from de-platforming other world leaders?"

BJP IT Department Head Amit Malviya reacts to the Trump ban

It may be premature to assume that Twitter will de-platform leaders for minor offences, especially when their lack of action on objectionable tweets from leaders has been a common criticism. The argument does, however, bring up an important question: Should platforms be more responsive to governments outside the United States? Or, in other words, should social media platforms be above the laws of other countries?

The way platforms tackle this question appears to be driven by the nature of a given regime. Twitter has been proactively documenting state-backed information operations since 2018, a policy change triggered by Russian interference in US elections. Since 2019, it has expanded its focus from interference in political processes to broader 'inauthentic' behaviour, including positive and negative propaganda.<2> Most recently, Twitter condemned the Ugandan government's internet shutdown ahead of its election.


All this means, of course, that Team Twitter is making sovereign decisions outside of the legal and political processes of a country. Some users' frustration with this fact has resulted in calls for domestic alternatives to US-based platforms. India's "Swadeshi social network," Tooter, and the "made in India" Mitron have claimed to be such platforms, and their short, murky histories are apt examples of the challenges that come with these "alternatives": weak content moderation policies (making for a toxic user experience), rampant disinformation, and poor user privacy and security practices.

US-based platforms are accountable to the US Constitution and legal system. Free speech rights under the First Amendment are among the broadest in the world, and the Supreme Court has ruled that legal restrictions on speech must be "narrowly tailored." The Court, for instance, successively struck down laws designed to protect children from online pornography on the grounds that they were not the least restrictive means of achieving the stated goal. Free speech rights around the world are tailored to a given country's unique social context and history, and there are significant variations even amongst democracies. South Korea's 공직선거법 (Public Official Election Act, 1994) contains strict guidelines on online election coverage and prohibits content, including from sitting public officials, that may sway the outcome of an election; the Supreme Court of South Korea also treats a range of offensive or misleading speech acts on social media during an election as criminal. Under the German criminal code (Section 130(3)), approving of, denying, or downplaying Nazi rule is a punishable offence.

At the same time, for many users, these platforms are their only space for political speech that is critical of their home governments. Citizens of the Philippines took to Twitter in droves in July 2020, using the hashtag #JunkTerrorBill, when the Duterte government passed the controversial Anti-Terrorism Act of 2020. Social media platforms have also played a pivotal role in the Hong Kong pro-democracy protest movement.


Social media is deeply interwoven into the daily lives of millions of people around the world. Governments rely on social media to interact with their citizens, to convey policy, and to conduct diplomacy. The decisions these platforms take about the content and users they host can affect livelihoods, political processes, and major public policy decisions in jurisdictions far removed from the ones they are legally headquartered in.

The tussle between the user, the state, and the platform brings us to the final and most important question:

How can platforms be more accountable and transparent? Should their boardrooms be elected, given that they hold as much power as governments?

Governments have the heavy hammer of the law, but users are left in the lurch, which is precisely why platforms need to involve users in their decision-making. At present, regular users can either accept policy changes made without their input or leave the platform, the latter hardly being a feasible choice for many. Companies respond only when they can be held accountable by stakeholders, and for users to become true stakeholders, they need to be able to influence the decisions made by executives at big tech HQs.


<1> The first sitting world leader to be suspended, albeit temporarily, was Iran's Supreme Leader Ali Khamenei.

https://www.buzzfeednews.com/article/ryanmac/twitter-removes-tweet-reportedly-from-irans-supreme-leader

<2> Positive propaganda is used to generate a positive view about a given cause, phenomenon or regime, whereas negative propaganda does the opposite.

The views expressed above belong to the author(s).

Author

Trisha Ray

Trisha Ray is an associate director and resident fellow at the Atlantic Council's GeoTech Center. Her research interests lie in geopolitical and security trends in ...