Originally published on March 25, 2021
Moderating online content in the United States

The attack on the U.S. Capitol in Washington, DC by supporters of former President Donald Trump in January 2021 sparked a flurry of activity aimed at moderating online content in the United States. Trump was banned by Twitter, Facebook, Instagram, Pinterest, YouTube, and other social media platforms. Twitter purged accounts supporting the right-wing QAnon conspiracy theory. Parler, a growing social media app pitching itself as a free speech alternative to Twitter, was removed from the Apple and Google app stores over its lax content moderation standards. Parler was also dropped by Amazon Web Services (AWS), effectively taking the site offline for a month until the company could find an alternative web host. These developments intensified the debate over whether and how companies should be able to remove content and ban users, and what Congress’ role should be in regulating such decisions.

The debate is not just about protecting freedom of speech and expression. Technology and social media have affected society in a variety of ways, touching on the right to information, privacy and data protection, due process, the concentration of power in technology companies, and even antitrust and market competition. As the prospect of Big Tech regulation looms, regulation partly motivated by and with consequences for content moderation, it is important for users to understand the context of this debate in the United States, what is at stake, and potential avenues for regulation.

The First Amendment and Section 230

The policy conversation about online content moderation in the United States starts with Section 230 of the Communications Decency Act of 1996 (CDA 230), which exempts platforms from liability for what their users post, while also giving them the ability to set their own rules for moderating content and thus take down speech they deem in violation of those rules. CDA 230 applies to any website or service that publishes material written by a third party—from Twitter and other social media platforms, to online retailers where shoppers can post reviews, to community blogs where neighbors can leave comments. However, CDA 230 does not provide platforms with blanket immunity for all content: it does not exempt platforms from U.S. criminal or copyright laws, and they can be held liable under those laws for content related to sex trafficking, terrorism, and copyright violations.

Since the 2016 election, there have been growing calls from both sides of the aisle to repeal CDA 230. Former President Trump tweeted calls to repeal Section 230, and lawmakers, state attorneys general, political candidates, and the Justice Department have put forward proposals to reform the law. For those on the right, repeal is cast as a way to rein in Big Tech and end a perceived bias against conservative voices. Democrats, by contrast, see CDA 230 as giving social media platforms little incentive to remove extremist content and misinformation, since the platforms cannot be held accountable for failing to do so.

It is important to note what both the Democratic and Republican positions get wrong about the impact of repealing CDA 230. According to legal experts, more free speech is not the outcome the “Revoke 230!!!” activists are likely to achieve if they get their way. Stripped of the liability shield, platforms would likely remove more speech rather than risk lawsuits, potentially infringing on free expression and stifling minority voices.

In addition to CDA 230, the other foundational element of this conversation in the U.S. is the First Amendment. Many conservatives have cried censorship and accused platforms of violating their First Amendment rights after being banned from various services for promoting false, hateful and violent content, but such complaints rest on a fundamental misreading of the Constitution. The First Amendment establishes that the government cannot abridge the right to free speech, effectively preventing Congress from legislating what content platforms must take down (or leave up), but it does not place the same restrictions on private companies. Big Tech companies have First Amendment rights as well, and may exercise those rights by removing content or selecting which apps they want—or do not want—to host in their stores. Bolstered by CDA 230, platforms may set rules about what speech is permissible on their sites without inherently violating users’ free speech rights.

The Competition-Moderation Nexus

One aspect of the content moderation conversation quickly veers into questions about competition and monopoly. If a user is banned from Twitter or an app is removed from the Apple app store, there are other places where the user or app maker can publish their “speech,” even if the audience on an alternative platform is not as large as the one they would reach on the Big Tech platforms. As such, it is difficult to claim real censorship if one can simply move to another platform.

However, this reasoning relies on there being sufficient competition in the marketplace, at the level of both app stores and platforms. App store operators such as Apple and Google have faced increasing scrutiny over their roles as gatekeepers that control access to a majority of users and the market, and they face lawsuits and the prospect of regulation to prevent them from abusing that power anti-competitively. Questions about whether there is enough competition at the platform level have led some to suggest that sites like Twitter and Facebook have also become gatekeepers, so large in user base and audience reach that access to their platforms should be guaranteed. Proponents of this view argue that such sites should be treated as public utilities (and thus subject to special types of government regulation), or as common carriers (compelled to carry all content to everyone, much like telephone and SMS services).

However, in addition to questioning whether there is sufficient legal basis for such regulation, experts have warned that forcing platforms to be neutral carriers of content is a slippery slope: in the marketplace of ideas, the loudest, most extreme voices tend to win out, especially on platforms curated by algorithms designed to generate clicks. Platform neutrality would therefore lead to more problematic content online, not less. Other opponents of treating platforms like public utilities warn that doing so would only cement the large platforms as monopolies, and argue instead for actions to promote a more competitive environment.

Beyond the platforms themselves, the tougher questions arise further down in the Internet infrastructure. Net neutrality, for example, was an attempt to regulate Internet Service Providers (ISPs) like Verizon or Comcast by requiring them, as the “pipes” through which online content travels, to carry all sites at the same speeds to everyone. But beyond the ISPs, there are legitimate questions about the actions and neutrality of services like AWS, or Cloudflare, a service that protects sites from attacks, which effectively took a white supremacist website offline in 2017 when it refused to continue serving the site.

The Challenges of Self-Regulation

There is general consensus that most people would like to see some level of moderation when it comes to speech online. For example, in the wake of the Christchurch shooting in New Zealand, calls grew for social media platforms to remove the video of the shooting from their sites, even though sharing or posting it was not unlawful in the United States. Many users say they prefer to use social media without being harassed by death threats or racial slurs, and there is a broader consensus that taking down toxic content such as revenge porn or pro-eating disorder and pro-suicide messaging is a net positive for society, even though that speech may be technically legal.

At the same time, many people are wary of powerful tech companies setting the rules for what is permissible to say in a democratic society, taking on the role of courts, and possibly rigging the game in someone’s favor. Companies’ efforts to moderate themselves have faced scrutiny for a lack of transparency and for inconsistent application of their moderation policies. They have been accused of depriving users of due process, as procedures for appealing takedown decisions are often unclear or inaccessible. The power these companies hold to effectively regulate a large portion of public speech can sit uneasily with their profit motive, which does not always incentivize protecting the fundamental rights to free speech and expression. Facebook in particular has been heavily criticized for carrying out the censorship objectives of authoritarian governments, taking down posts and banning users at governments’ request in exchange for maintaining access to those countries’ markets. The issue also comes back to market competition: big corporations such as Facebook and Twitter can afford large-scale moderation operations, and could establish a floor for content moderation that smaller, newer platforms cannot meet, effectively shutting them out of the market.

A Role for Congress?

If private companies are not the appropriate guardians of the right to free speech and expression in democratic societies, the logical place to turn would be the elected representatives of the people. However, the U.S. government is hemmed in by the First Amendment, unable to make broad laws telling companies what speech to remove or keep, so options for content regulation legislation that would withstand strict constitutional scrutiny are limited. This has not stopped politicians and lawmakers from attempting to address the content moderation issue by proposing various reforms to CDA 230. In March 2020, Senators Lindsey Graham (R-SC) and Richard Blumenthal (D-CT) introduced the EARN IT Act, which was nominally targeted at combating child sex abuse material but has been characterized as unconstitutional and as an underhanded way to ban encryption, further eroding privacy rights in the U.S. Later in the year, another bipartisan pair of senators, Brian Schatz (D-HI) and John Thune (R-SD), introduced the PACT Act, which proposed requirements for companies to improve the transparency of their moderation practices and decisions and to remove content deemed illegal by a court within twenty-four hours. Some have heralded this as a step in the right direction, if still insufficient.

In addition to the numerous other proposals made in 2020, 2021 has already seen proposals from Democrats, who are now in control of Congress. One such proposal is the SAFE TECH Act, focused on limiting the applicability of CDA 230 in the case of online ads. (Critics are already warning of the bill’s potential unintended and widespread consequences for all online speech, given its broad wording.) While no proposal has yet garnered broad support and momentum, we can expect more movement on Big Tech regulation—including content moderation.

End of an Era

Establishing a regulatory framework that combats heinous and toxic online content, upholds the rights to free expression and access to information, fosters a digital environment that is safe for all people, and reflects democratic values is no easy task. Tackling this issue will likely require a variety of legislative and social tools to address all the aspects at play, from antitrust regulation to a robust privacy and data protection framework.

Furthermore, policymakers and the public alike should be wary of policies and regulations crafted in the aftermath of big controversies or as reactions to loud public outcry, as these will not be the kind of thoughtful policies needed to safeguard digital rights and democratic principles. While this may be a new conversation in Washington, many people have been working on the issue for some time, and much good work has been done by organizations outlining principles for content moderation. The Trans-Atlantic Working Group developed a framework for platform governance intended to uphold democratic principles and a free internet. Digital rights NGOs and various consortiums have provided guidance to both governments and platforms on rights-based approaches to content moderation, including advocating for transparent reporting on account suspensions and content removals, adequate notice and justification to users when acting on their content, and the establishment of an appeals process. The European Union is currently developing a sweeping regulatory framework, the Digital Services Act, which offers some possible elements for U.S. policymakers to emulate; it attempts to take a comprehensive approach to regulating “very large” online platforms, for example by requiring them to conduct systemic risk assessments of how their services affect social goods such as public health, safety and privacy. A public consultation launched by the EU on the package drew over 300 position papers and thousands of comments, which can also provide insight into how to balance the competing rights, priorities and responsibilities at play.

It seems that the era of unregulated tech in the United States is coming to an end, and as lawmakers, corporate leaders and users alike start talking about what kind of rules are needed, including rules about speech online, it is important that we understand what can be accomplished if we pursue this thoughtfully—and what is at stake if we do not.


This commentary originally appeared in ORF America.

The views expressed above belong to the author(s).