Curating Cyberspace: Rights, Responsibilities, and Opportunities

Editor’s Note

This article is part of Regulating Social Media Platforms: Government, Speech, and the Law, a symposium organized by Just Security, the NYU Stern Center for Business and Human Rights, and Tech Policy Press.

Free speech sells. In the United States, the view that tech platforms are outposts of free speech and democracy has been leveraged to grant the industry sweeping immunity from liability under Section 230 of the Communications Decency Act. This interpretation of the law preemptively absolves tech platforms of responsibility for the choices they make about third-party content – both what not to allow and what to allow. But while the right of private entities to exclude the speech of others as they see fit is an essential and long-recognized aspect of the First Amendment freedoms of speech and association, the decision to repeat, promote, or encourage other people’s speech has generally been understood to carry responsibility. A newspaper is, of course, not liable for the op-eds it chooses not to publish, but it does face potential liability for those it does publish. The two choices available to speech intermediaries – exclusion and inclusion – are not symmetrical.

The argument that online intermediaries deserve special privileges that other intermediaries do not relies heavily on the popular fiction of the “digital public square.” From John Perry Barlow’s 1996 declaration that cyberspace is “a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity” to the Supreme Court’s 2017 encomium to the “vast democratic forums of the Internet” and to Elon Musk’s 2023 assertion that “X is the global town square,” techno-libertarian rhetoric has justified the government’s selectively laissez-faire approach to the regulation of tech platforms, and tech platforms’ selectively laissez-faire regulation of themselves.

Section 230 has been interpreted to absolve online platforms of liability for a vast range of harms, including sexual exploitation, targeted harassment campaigns, coordinated terrorist activities, illegal firearms sales, election lies, deadly health misinformation, and dehumanizing propaganda – even when the platforms are fully aware of these harms and even when they profit from them – in the name of free speech. Defenders of this status quo maintain that this is a necessary price to pay to safeguard the internet’s unique role in defending democracy from authoritarian assault.

But as has been made painfully clear in our current historical moment, not only has the “digital public square” failed to meaningfully challenge the rise of authoritarianism in the United States, it has accelerated it. Continued warnings about the dystopian future that awaits us if the tech industry’s sweeping immunity is curtailed ignore the all too obvious reality that this dystopian future has already arrived. Tech industry leaders were given every resource, every opportunity, every form of special treatment to make good on their promise of creating a digital public square for the democratic exchange of ideas and the unfettered exercise of free speech.

What they did instead was create a massive data-mining operation that optimizes extremist and exploitative content in the ruthless pursuit of profit. Rather than serving as a bulwark against authoritarianism, the tech industry has become its handmaiden. It is dominated by a handful of massively influential tech platforms that claim to be neutral but have prioritized reactionary conservative content for years; search engines that systematically boost false information favoring conservative extremism; and a bevy of conservative social media forums, including one owned by President Donald Trump himself, that consistently push out far-right, extremist, neo-Confederate content.

If there is going to be an effective online resistance to the fascist takeover of the United States, it will not be through the pretense of the “digital public square” that disguises corporate greed as the will of the people. It will not be through the moral hazard created by allowing online platforms to benefit from promoting harmful content while suffering none of its consequences. And it will not be through the misguided and unconstitutional attempt to impose liability for the choices platforms make to exclude, criticize, or ignore speech. Instead, it will be through platforms embracing a role that balances expressive rights and responsibilities in a principled and pro-democratic way: the role of curator.

Masks Off: Neo-Confederates in the Digital Public Square

The rosy myth of the digital public square open to all is refuted by the public internet’s earliest history. White supremacists were among the first internet adopters, quickly leveraging the internet’s decentralized and anonymized communication structure to share propaganda, attract more members, and plan violent attacks. The strategies of doxing, trolling, conspiracy theories, and memes used by far-right extremists today were developed on white supremacist online bulletin boards in the 1980s and 90s. As the journalist Adam Clark Estes has noted, “You can draw a line from the first neo-Nazi online bulletin boards to the online hate forum Stormfront in the ’90s to the alt-right movement that helped Donald Trump rise to power in 2016.”

In the nearly thirty years since Congress passed Section 230, power in the online marketplace of ideas has become concentrated in the hands of a tiny number of multi-billion dollar corporations, due in no small part to the enormous financial benefits that flow from being shielded from liability for almost any unlawful content and conduct they might facilitate (a luxury not afforded to newspapers, book publishers and distributors, television stations, universities, or individuals). And although many of the major tech players eventually developed some modest policies and practices to remove or respond to harmful and illegal content (often with the uncompensated help of advocates and outside experts), the most powerful of these companies have retreated dramatically from content moderation policies aimed at curbing misinformation, sexual exploitation, and violent white male supremacy.

After Elon Musk bought Twitter in 2022, he disbanded its Trust and Safety Council (although not before falsely claiming, on Twitter, that these uncompensated external advisers with no decision-making authority over Twitter’s content-moderation decisions were responsible for the continued presence of child sexual exploitation material on the platform). Musk reinstated Trump’s previously suspended X account, along with those of a number of neo-Nazis, child pornographers, and sexual predators, while banning or restricting accounts critical of him. Musk altered X’s algorithm to promote his own content over that of other users, such as then-President Joe Biden, and routinely used the platform to threaten and harass critics and advertisers who had chosen to no longer do business with X.

Since Trump took office in January and declared Musk to be a “special government employee,” Musk has also used the platform to threaten his critics with government-enforced civil and criminal investigation and litigation. He has not only allowed but also personally promoted threats by government officials, including federal prosecutors, against private individuals, journalists, and legislators who have criticized or merely described actions taken by him and his “DOGE” associates.

X and other platforms have for years been adjusting their algorithms and policies to amplify extremist far-right and misogynist content (including conspiracy theories, revenge porn and deepfakes, election misinformation, and neo-Nazi and other white supremacist propaganda) while at the same time removing content disfavored by conservatives (information about birth control and abortion, pro-Palestinian speech, etc.). Meta founder and CEO Mark Zuckerberg not only reinstated Trump’s account in 2023 but quickly promised, after Trump won the 2024 election (and after Trump had threatened Zuckerberg with life in prison), that Meta would be rolling back its content-moderation practices in the name of “free expression.”

Republican leaders have deployed the same tortured and inverted definitions of “free expression” and “censorship” to justify imposing government control over not what platforms keep up, but what they take down. Florida and Texas enacted laws that sought to force platforms to host content against their will; Trump’s picks to head both the Federal Trade Commission and the Federal Communications Commission have accused private tech companies of “censorship” for moderation actions. Republican leaders have been promoting a constitutionally illiterate conspiracy theory known as the “censorship industrial complex” for years, claiming widespread collusion between the Biden administration and social media companies to repress conservative speech even as conservative content has steadily dominated the online information environment.

While the second Trump administration is fond of characterizing its full-scale assault on America’s legal, cultural, and social institutions as “combating DEI,” the more accurate way to describe his political agenda, as several commentators have observed, would be “anticivil rights” or “pro-segregation.” Such terms do a better job of illuminating the sexual and racial resentment at the heart of the Trump administration’s efforts to censor, punish, and defame women and people of color while promoting supremacist beliefs about whiteness, merit, and masculinity. The neo-Confederate nature of Trumpism is evidenced in its nostalgia for pre-Civil War racial patriarchy, including attachment to rigid gender roles, belief in racial superiority, and an unfounded but persistent sense of persecution. Hence the censorship of words and concepts that acknowledge the historical exploitation and exclusion of women and nonwhite men; the erasure of the enormous contributions to society made by these groups despite their repression; and the reversal of the modest progress made toward rectifying longstanding wrongs. Radical, revolutionary, and revelatory ideas – and even ideas simply premised on equality and non-discrimination – are being purged from official records, government monuments, libraries, schoolbooks, classrooms, newspapers, and broadcast media and replaced with insipid, government-mandated, neo-Confederate propaganda – propaganda that has been poisoning the online marketplace of ideas for decades.

Curation as Right and Responsibility

This dangerous, anti-democratic moment is largely the predictable result of how Section 230 has been interpreted to scramble the rights and responsibilities of online platforms. The longstanding, constitutionally protected freedom of tech platforms to exclude harmful content is under attack, while the unjustifiable and unprecedented immunity for promoting it continues unabated. Two recent cases, however, provide some glimmers of hope and clarity.

The first is Moody v. NetChoice (2024), which involved a challenge to the Florida and Texas social media laws mentioned above. Though the Supreme Court did not reach the merits, remanding the case for a proper analysis of the facial challenges, Justice Elena Kagan’s majority opinion affirmed that tech platforms, as private actors, have a First Amendment right to “curate” compilations of third-party speech as they see fit. This includes choices about what speech to leave out, meaning that the government cannot force a private actor to include speech in a compilation if it does not wish to: “To give government that power is to enable it to control the expression of ideas, promoting those it favors and suppressing those it does not. And that is what the First Amendment protects all of us from.” This long-recognized aspect of First Amendment doctrine applies not only to the Florida and Texas laws but also to the attempts by the Trump administration to force platforms to host and promote speech against their will.

And while the holding in NetChoice does not directly address Section 230, it helps illuminate the justification for Section 230(c)(2), which shields providers and users of an interactive computer service from civil liability with regard to any action that is “voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable” or “taken to enable or make available to information content providers or others the technical means to restrict access” to such material. Section 230(c)(2)’s procedural protections serve as an important safeguard against efforts of government officials to interfere with the First Amendment rights of tech companies to ban, delete, or restrict content on their platforms.

The other side of the right to curate, however, is the responsibility for that curation. The interplay between the right and the responsibility was made clear by the Third Circuit in Anderson v. TikTok, 116 F.4th 180 (3d Cir. 2024), a case explicitly focused on the scope of Section 230 immunity. The issue before the court was whether TikTok could claim immunity under Section 230(c)(1) for content that its recommendation algorithm curated and served to users (specifically, a “Blackout Challenge” video allegedly shown to a young girl who died attempting to replicate it).

Section 230(c)(1) states, in relevant part, that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” As noted above, courts have interpreted this provision very broadly to provide online intermediaries immunity from liability for third-party content they provide or promote on their platforms, even if they know of its unlawful nature and take no reasonable steps to address it.

But the Third Circuit held that Section 230 did not provide TikTok with such immunity here. The court reasoned that Section 230(c)(1) immunizes interactive computer services “only if they are sued for someone else’s expressive activity or content (i.e., third-party speech), but they are not immunized if they are sued for their own expressive activity or content (i.e., first-party speech)” (emphasis added). If, as Justice Kagan asserted in NetChoice, platforms have a First Amendment right to curate content, then it follows that such curation “amounts to first-party speech under §230, too.” Accordingly, Section 230(c)(1) should not shield platforms from being held responsible for that curation.

Platforms should be held accountable for foreseeable harms to individuals or society, both as a moral matter and as a legal matter, if plausible causes of action exist. That may include liability for promoting lies that cause concrete injury; orchestrating or amplifying harassment campaigns to intimidate whistleblowers and dissenters; and assisting in punitive, discriminatory, or otherwise unconstitutional government action.

The Third Circuit in Anderson correctly concluded that the curation of speech – whether online or offline – is both a right and a responsibility. It thus rejected an interpretation of Section 230 that would protect the right but erase the responsibility. But the Third Circuit is an outlier on this issue; far too many courts have interpreted Section 230 in exactly the way Anderson rejects. For that reason, Section 230 should be amended to explicitly limit the protections of (c)(1) while holding fast to those afforded by (c)(2).

I have elsewhere suggested that Section 230(c)(1) immunity for content that intermediaries choose to present or promote should be granted only when three conditions are met: first, when the content in question is speech, as opposed to conduct; second, when the speech is wholly provided by a third party, as opposed to being solicited or encouraged by the platform itself; and third, when the platform has not exhibited deliberate indifference to harm caused by that speech. These amendments would significantly narrow the scope of online activity for which platforms could disclaim responsibility, incentivizing them to act more responsibly while continuing to shield them from liability for merely providing access to the speech of others.

But even a more minimal reform would improve upon the status quo. Section 230 distinguishes between “interactive computer service providers” (ICSPs), which are eligible for immunity, and “information content providers” (ICPs), which are not. But because it is not entirely clear when a platform crosses the line from an ICSP into an ICP, and because most courts err on the side of treating platforms as ICSPs, a minor revision of the definition of an ICP could be helpful. Currently, Section 230 defines “information content provider” as “any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service.” Adding the solicitation and encouragement of information to this definition could provide a meaningful limitation on Section 230 immunity.

Curation as Opportunity

According to the defenders of the Section 230 status quo, any limitation of tech industry immunity will mean nothing less than the collapse of our democracy. The risk of costly litigation will crush smaller platforms and compel bigger ones to cave to political pressures, while courts will be overwhelmed by frivolous suits brought by the powerful and wealthy. In short, the digital public square will be reduced to a playground for corporations and politicians, and the most vulnerable among us will have even fewer options for dissent and defense.

But that dystopic future is already here. Thirty years of tech industry rights without responsibility has led us to this point: to placing the vast democratic potential of the most sophisticated information communication systems in history in the hands of a few wealthy white men; to shutting the courtroom door on those injured by corporate recklessness and all the knowledge their fight for justice could have revealed; to the brutal repression of ideas, art, speech, dissent, and democracy by the combined authoritarian forces of government and industry.

The word “curation” derives from the Middle English word curacioun, having to do with “curing, restoration to health, medical treatment” and the Latin cūrātiōn, meaning “superintendence, taking care, treatment of a disease or sick person, office to which duties are attached,” and curare, “to watch over, attend, treat (sick persons), restore to health.” Curation is a right, a responsibility, and an opportunity for platforms to promote the principles and values that can counter the destruction of our democracy. Tech platforms have tremendous potential to serve as sites of free speech, but they must own that speech, in multiple senses of the word, to do so.
