Last month, the Supreme Court heard five hours of oral arguments in two terrorism cases, Gonzalez v. Google and Twitter v. Taamneh. Many fear the outcome of these cases could “break the internet.” Perhaps aware of the far-reaching implications of the wrong outcome in these cases, and the limitations of their technical expertise – as Justice Kagan noted, Supreme Court justices aren’t “the nine greatest experts on the internet” – the Court seemed hesitant to break the internet just yet.
But while the Justices showed reluctance to take steps that could radically change how content curation and content moderation work, they mostly ignored the free speech and user rights considerations that were also at stake. Yet the decisions in Gonzalez and Taamneh could have severe implications for free speech online. Should the Supreme Court rule for the plaintiffs, online platforms would effectively be compelled, in order to shield themselves from liability, to remove vast amounts of content that is protected under international freedom of expression standards. Platforms would also be encouraged to rely ever more heavily on automated content moderation tools in a manner that is likely to over-restrict speech.
The Cases Have Broad Implications for How Social Media Platforms Moderate Content
The facts and legal questions underlying both cases, which are closely related, have been discussed extensively. In a nutshell, both cases were initiated by families whose relatives were killed in ISIS attacks in Paris and Istanbul. In each case, the plaintiffs argue that the defendant platforms have aided and abetted ISIS and thus violated U.S. antiterrorism statutes.
However, the questions before the Court in each case revolve around the interpretation of different laws. In Taamneh, the question is whether a platform that provides a widely available service that is also used by terrorist entities for propaganda and recruitment can be liable for aiding and abetting international terrorism under the Anti-Terrorism Act (ATA). In Gonzalez, the Court must determine whether Section 230 of the 1996 Communications Decency Act covers recommendation systems. Section 230 grants online platforms legal immunity for content posted by third parties and allows them to remove objectionable content without exposing themselves to liability. The provision has been fundamental in protecting free speech and fostering innovation online.
If the Court interprets aiding and abetting liability broadly and also narrows Section 230 protections for recommender systems, which organize, rank and display third-party content, platforms would need to fundamentally change how they operate. Indeed, although the plaintiffs formally seek liability only for recommendation systems, it is hard to see how siding with them would not result in liability for content posted by users. The plaintiffs did not claim that the recommender systems were designed to push ISIS content or that they singled out ISIS content in any way. If the mere fact that an algorithm has sorted, ranked or prioritized content is sufficient to restrict immunity under Section 230, this would render Section 230 inapplicable to virtually all content on the major platforms. To avoid liability, platforms might either abandon recommender systems altogether or, more likely, increase their reliance on automated tools and remove protected speech in a precautionary and likely overbroad manner.
In Gonzalez, the Justices questioned how immunity for third-party content could be maintained if restrictions on immunity applied to content-neutral algorithms that recommend that same content. Justice Kagan observed that “every time anybody looks at anything on the Internet, there is an algorithm involved” and that “in trying to separate the content from the choices that are being made, whether it’s by YouTube or anyone else, you can’t present this content without making choices.” Even Justice Thomas, who had previously been eager for the Court to review Section 230, questioned why Google should be held liable if it applies its algorithm in a content-neutral way, showing “cooking videos to people who are interested in cooking and ISIS videos to people who are interested in ISIS, [and] racing videos to people who are interested in racing.” As Chief Justice Roberts noted, limiting Section 230 immunity in that way would have consequences well beyond liability under the ATA, exposing platforms to all sorts of legal actions, such as defamation or discrimination claims.
Similarly, in Taamneh, the Justices sought to find some limiting principle or middle-ground approach to constrain the scope of the ATA. Many of their questions were aimed at understanding when someone could be considered to “knowingly” provide assistance to a terrorist group and what sort of assistance could be considered “substantial.” Justice Gorsuch also observed that there was very little in the plaintiffs’ arguments that linked Twitter to the ISIS attack and that it was important to “preven[t] secondary liability from becoming liability for just doing business.” And indeed, plaintiffs’ counsel, Eric Schnapper, insisted that a defendant platform could face ATA liability even if it lacked any knowledge or awareness of a particular attack and did not assist the attack in any way. But Justice Thomas appeared concerned about such a broad interpretation of the ATA when he said, “If we’re not pinpointing cause-and-effect or proximate cause for specific things, and you’re focused on infrastructure or just the availability of these platforms, then it would seem that every terrorist act that uses this platform would also mean that Twitter is an aider and abettor in those instances.” At the same time, Justice Kagan seemed to suggest that a platform might be held liable if it did not have any content moderation policy in place or failed to take any action to remove terrorist content.
The Supreme Court Largely Ignored Threats to Free Speech
While the Justices seemed generally aware of the ramifications of a decision for the future of the internet, they mostly ignored the implications their ruling could have for free speech and user rights. The sheer scale of the content potentially giving rise to liability, which amounts to millions of posts per day, also did not play a major role in the discussion.
A ruling limiting Section 230 protections could mean, in the words of Justice Kavanaugh, that lawsuits against online platforms would become “non-stop.” Platforms would be forced to monitor everything posted on their services worldwide and censor large swaths of content; there is no other conceivable way to avoid liability. It takes little imagination to predict that in such circumstances controversial, shocking, or offensive speech would be liberally removed as companies seek to shield themselves from liability and prevent lawsuits, even though these types of speech are protected under international freedom of expression standards. Increased reliance on automated content moderation tools is a further likely consequence, despite their limitations. These tools are unable to make the complex assessments required to determine whether speech is illegal or qualifies as “hate speech,” “extremist,” or “terrorist” content, particularly in languages other than English. Nor can they detect nuance, irony, or whether content is in the public interest, and they would likely restrict all sorts of lawful speech.
The Justices also drew analogies to other industries during the Taamneh hearing, asking about the hypothetical liability of gun dealers, banks, or pharmaceutical companies. But there was no recognition that online platforms are very different. They do not merely offer services to potentially problematic or dangerous actors; they also enable public discourse and expression online, hosting content generated by hundreds of millions of users in the case of Twitter and billions of users in the case of Google.
Google’s counsel, Lisa Blatt, did specifically point to the amici briefs raising free speech concerns. She argued that if the Supreme Court sided with the plaintiffs, platforms would either over-moderate and turn into The Truman Show or not moderate at all and turn into a “horror show.” Surprisingly, Twitter’s counsel, Seth Waxman, did not bring up free speech. In a somewhat unfortunate example, he even suggested that platforms could be considered to have the level of knowledge that could give rise to liability if they were notified by law enforcement authorities, such as the Turkish police, of certain posts or accounts but ignored the takedown requests. This is a problematic position to take, given that many governments ask for removal of content as a way to control speech and stifle dissent, rather than as a tool to prevent terrorist attacks.
In any case, it should be up to independent and impartial judicial authorities, not the executive, to make decisions on removal of speech in accordance with due process and the international human rights law standards of legality, legitimacy, necessity and proportionality. Complex legal and factual issues should also not be delegated to private actors, including online platforms. Allowing private actors whose motives are primarily economic, and who have the incentive to limit their liability exposure, to make decisions on the legality of users’ speech will inevitably lead to undue restriction of free speech online.
Low thresholds for aiding and abetting liability could also affect freedom of expression beyond questions of intermediary liability. If the Court were to adopt the plaintiffs’ theory of liability, the ruling could have chilling effects in other areas, such as public interest reporting, as Justice Kavanaugh noted. More concretely, Justice Kavanaugh asked Schnapper whether CNN could have been sued for aiding and abetting the 9/11 attacks by airing an interview with Osama bin Laden. Schnapper rightly suggested that CNN has a First Amendment right to show the interview. In fact, free speech protections should play a larger role when assessing liability, whether the conduct involves CNN, other news outlets, or online platforms that host the speech of millions.
Will the Supreme Court Defer to Congress for Section 230 Reform?
For those worried that the Supreme Court could upend the legal structure of intermediary liability, the oral arguments are cause for optimism. Instead of demonstrating eagerness to reconsider Section 230, the Justices appeared unsure about how exactly the law should be interpreted, where to draw the line between intermediary conduct and user-generated content, and whether they had the necessary technical expertise to do so. The Court also seemed conscious that the outcome of these cases, in particular Gonzalez, could have serious economic consequences. Justice Kavanaugh cited warnings put forward by amici that a decision narrowing Section 230’s immunity could “crash the digital economy.”
Some Justices suggested that the Supreme Court may not be the best venue to decide whether and how Section 230 should be reformed, as the Court is not equipped to account for the consequences its decision might entail. Justice Kagan asked whether reforming Section 230 is “something for Congress to do, not the Court?” She echoed several amici who argued that any changes to Section 230 or the broader regulatory framework should come from Congress. And indeed, as ARTICLE 19 and others have argued, the complexity of policymaking and lawmaking in this area, which affects the human rights of billions of users, in the United States and beyond, “requires careful legislative fact-finding and drafting” that is not amenable to judicial decision-making.
Many governments around the world are grappling with the question of how to regulate major platforms to prevent the amplification of radical, hateful, or extremist content online. In the United States, regulation of online platforms and recommender systems might take the form of changes to Section 230, but it can also occur through other means. Whether the task falls to the Supreme Court or to Congress, any institution reviewing platform regulation needs to ensure that human rights lie at the heart of its considerations. This means that the principles of legality, legitimacy, necessity and proportionality must be applied throughout. Any framework that imposes limitations on free expression must be grounded in robust evidence and prioritize the least censorial and restrictive measures to address online harms.
Instead of asking platforms to exercise even more power over our speech by screening and assessing all user-generated content, regulators should focus on less intrusive methods that are specifically tailored to tackling some of the negative effects of the platforms’ recommendation systems. For example, regulatory solutions should require companies to be more transparent towards regulators, researchers and users about how their recommendation systems work, set clear limits on the amount of user data that platforms are allowed to collect, and mandate human rights due diligence. They should also address the dominant position of the biggest online platforms through regulatory tools that would increase competition in the market and enhance users’ choice over what content they get to see online. Some of these regulatory solutions were adopted in the European Union last year with the EU Digital Services Act and the Digital Markets Act. While these regulations could have been more ambitious in protecting human rights online (for example, by establishing an explicit right to encryption and anonymity for users), they do correctly focus on rebalancing digital markets and regulating the content moderation and curation systems applied by online platforms rather than mandating the restriction of undesirable types of users’ speech.
These policy considerations go beyond the questions before the Justices in Gonzalez and Taamneh. Should the Justices decide to reinterpret Section 230 in any way, they will, at the very least, have to carefully consider the impact of any new limitations on freedom of expression online and whether such limitations are compatible with the provision’s underlying purpose and the online ecosystem it has created.