This fall, the Supreme Court announced that it would consider challenges to Texas and Florida laws that police how social media companies moderate content on their platforms.

As one federal appeals court put it, at their core, these laws “tell a private person or entity what to say or how to say it” by restricting the ability of platforms to exercise their editorial judgment, which is protected under the First Amendment. The Eleventh Circuit Court of Appeals struck down the Florida law on those grounds, while the Fifth Circuit Court of Appeals disagreed and upheld a similar Texas provision. Now, the Supreme Court has stepped in to resolve the dispute, and its ultimate decision has implications that reach far beyond social media.

The cases also affect artificial intelligence and how the government will be able to regulate it. Many proposed AI regulations would affect the kinds of content that models like ChatGPT can produce. If enacted, those regulations would likely require AI labs to change their content controls, which influence their models’ responses to user inputs. The Supreme Court’s decision in the social media cases will shape the kinds of arguments that labs can make against AI regulations by defining what counts as protected editorial discretion and which controls are subject to government oversight.

Different Approaches to Regulating AI Outputs

AI regulation is a hot topic in Washington as legislators from both parties and the Biden administration seek to make their mark in an emerging area of law. Though the discussion of AI has ranged from the development of artificial superintelligence to the need for enhanced export controls on AI chips sent to China, much of this regulatory attention will be geared toward controlling the kinds of outputs that AI can create. For example, in his Senate testimony this summer, Anthropic CEO Dario Amodei highlighted the risks of AI outputs ranging from disinformation to recipes for bioterrorism. Any regulation that seeks to tackle this kind of issue will inevitably have some effect on what kinds of content AI models are able to produce and share with their users.

Most leading AI companies have their own approaches to ensuring that their models will not produce harmful outputs. OpenAI uses “reinforcement learning from human feedback” (RLHF) to condition its chatbot models to avoid producing racist, biased, or other content that OpenAI deems unacceptable. In RLHF, human workers rate AI outputs for helpfulness, truthfulness, and harmlessness and compare outputs against each other to judge which is better. The model is then fine-tuned to incorporate this feedback into its future outputs. Anthropic takes a different approach, called Constitutional AI, which uses a set of written principles, the “constitution,” to guide the outputs of its Claude models. Other AI labs believe that constraints on a model’s outputs are unnecessary or even harm the model’s performance. For example, the French AI startup Mistral released its Mistral 7B model without any apparent safety controls, leaving the model willing to provide instructions for murder and to discuss ethnic cleansing, among other unsettling topics.
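To make concrete what this kind of preference-based tuning involves, the following is a minimal, illustrative sketch, not any lab’s actual implementation, of the reward-modeling step that underlies RLHF. Toy numeric vectors stand in for real model responses, and a simple linear model learns to score the output that human raters preferred more highly; real systems train neural reward models on embeddings of full responses and then use the learned reward to fine-tune the chatbot.

```python
# A toy sketch of the preference-learning ("reward modeling") step behind RLHF.
# Assumption: random numeric vectors stand in for embeddings of full model
# responses, and a linear model learns to score the human-preferred output
# higher (a Bradley-Terry-style objective).
import numpy as np

rng = np.random.default_rng(0)
dim = 4

# Each pair is (features of the output a human rater preferred,
#               features of the output the rater rejected).
pairs = [(rng.normal(size=dim) + 1.0, rng.normal(size=dim) - 1.0) for _ in range(200)]

w = np.zeros(dim)  # parameters of the linear reward model r(x) = w @ x
learning_rate = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(500):
    grad = np.zeros(dim)
    for preferred, rejected in pairs:
        # Probability the current reward model assigns to the human's choice.
        p = sigmoid(w @ preferred - w @ rejected)
        # Gradient of the log-likelihood: push the preferred output's score up.
        grad += (1.0 - p) * (preferred - rejected)
    w += learning_rate * grad / len(pairs)

# The learned reward model can now score new candidate outputs; in RLHF, that
# score becomes the training signal used to fine-tune the chatbot itself.
accuracy = np.mean([float(w @ a > w @ b) for a, b in pairs])
print(f"reward model agrees with the human raters on {accuracy:.0%} of pairs")
```

The relevance of this detail to the legal question is that these content controls are built out of aggregated human judgments about which outputs are acceptable, which is the kind of judgment the labs will likely characterize as editorial.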

Regulation is needed to provide common standards around the release of new AI models and to define what is or is not harmful to national security and public discourse. But legal constraints on the scope of such regulation, and on how it can operate, will shape how policymakers frame their solutions.

In particular, AI labs looking to avoid regulation will likely argue that laws affecting whether and how their models can output different kinds of information in response to user prompts infringe on their protected editorial judgment, an argument similar to the one social media platforms are advancing at the Supreme Court in the Florida and Texas cases. Approaches like RLHF can be understood as a form of content moderation, in which AI labs seek to control what kinds of content their models output according to a set of principles that they deem best. Just as Facebook combines AI and human review to determine what kind of speech is allowed on its platform, AI labs use AI and human review to determine what kind of speech is allowed to come out of their models.

Editorial Discretion and the First Amendment in the Florida and Texas Cases

The Florida and Texas laws differ, but broadly present the same First Amendment issues, usefully summarized in the government’s Brief to the Supreme Court recommending that it consider these cases. The Supreme Court decided to take up Questions 1, “[w]hether the laws’ content-moderation restrictions comply with the First Amendment,” and 2, “[w]hether the laws’ individualized-explanation requirements comply with the First Amendment.”

The Florida law, S.B. 7072, imposes various requirements and constraints on a set of large social media platforms, including that platforms “apply censorship, deplatforming, and shadow banning standards in a consistent manner,” and limits the ability of platforms to “censor, deplatform, or shadow ban” journalistic enterprises and candidates for political office. It also places controls on the use of “post-prioritization,” or the ability of platforms to elevate or reduce the visibility of certain content, such as restricting its use for the accounts of political candidates and requiring that platforms give users the ability to opt out of post-prioritization algorithms.

The Texas law, H.B. 20, also targets large social media platforms and, among other things, limits their ability to moderate content on their platforms. In particular, it prevents platforms from “censor[ing] a user, a user’s expression, or a user’s ability to receive the expression of another person” based on “(1) the viewpoint of the user or another person; (2) the viewpoint represented in the user’s expression or another person’s expression; or (3) a user’s geographic location in [Texas].”

In the Florida case, the Eleventh Circuit held that these kinds of restrictions are an unconstitutional violation of the First Amendment, providing two bases for this finding. First, it wrote that “[t]he Supreme Court has repeatedly held that a private entity’s choices about whether, to what extent, and in what manner it will disseminate speech—even speech created by others—constitute ‘editorial judgments’ protected by the First Amendment.” Holding that the use of content moderation is a kind of editorial discretion, the Eleventh Circuit wrote that social media platforms’ decisions on how to moderate content convey some messages but not others, often based on explicit expressions of the values of the company. Thus, when a social media company chooses whether to allow or to remove Holocaust denial, for example, on its platform, it is conveying a message that is protected under the First Amendment. In the alternative, the Eleventh Circuit held that the social media platforms’ content-moderation practices constitute “inherently expressive conduct” protected by the First Amendment because they convey “some sort of message” to a reasonable observer. When a platform chooses to remove a user’s post, it is conveying a message about what kinds of posts it will allow and what it will disallow.

The Fifth Circuit disagreed, upholding Texas’ law against essentially identical challenges. With respect to the editorial discretion arguments, the Fifth Circuit found first that editorial discretion likely does not exist as an independent category under the First Amendment, but second that even if it does, content moderation is not an example of it. The court wrote that editorial discretion requires that the entity exercising discretion “accept[] reputational and legal responsibility for the content it edits” and that the exercise of discretion through the selection and presentation of content occur “before that content is hosted, published, or disseminated.” In the Fifth Circuit’s view, because the social media platforms disclaim responsibility for content hosted on their platforms and perform review after content has been posted, they are not exercising editorial discretion. Commentators have criticized the Fifth Circuit’s opinion for both its reasoning and its potential impact, as did the dissenting judge in that case, who wrote that the majority’s focus on “censorship” by social media platforms obscured that their editorial choices are nonetheless speech, and so protected under the First Amendment, and that the Texas law does not withstand even intermediate scrutiny.

The Supreme Court will have to resolve this split and, in doing so, clarify the substance and bounds of editorial discretion as a doctrine. If it decides along the lines of the Eleventh Circuit, holding that choices about whether and how a company disseminates speech are protected as editorial discretion, then AI labs will have a stronger claim that their controls and limits on what kind of speech their models output are protected by the First Amendment. On this view, when ChatGPT refuses to provide a user with a requested output that OpenAI deems to be harmful, the company is exercising editorial discretion to refuse to disseminate that speech. Because ChatGPT usually explains to users why it is refusing to answer their questions, the argument that this decision is the speech of the AI company is even stronger. If the Supreme Court decides to eliminate editorial discretion entirely, then the companies will lack this protection, but such a decision seems highly unlikely. On the other hand, if the Court upholds the Fifth Circuit’s new test for editorial discretion, the question may be close: AI labs do disclaim responsibility for their models’ outputs, but they perform their selection and presentation of content before it is disseminated.

AI Models and Speech

It might be argued that AI labs are not exercising editorial discretion because they are not curating and combining the speech of others. However, labs could respond that they are doing so in three ways. First, generative AI requires and responds to the prompting of users, and what comes out of a model like ChatGPT is a combination of the model itself and the user’s engagement with it to draw out a particular response. If a user asks for a recipe to create a dangerous bioweapon, the AI takes its training data, its fine-tuning, and the user’s prompt and combines them to generate an output, plausibly a kind of curation of speech. While the most popular consumer models of generative AI take the form of conversational chatbots, which seem to separate human prompt from AI response, the integration of AI into applications like Microsoft Word and Google Docs to help users draft prose and summarize notes will blur the line between user and machine, blending their words in a way that mimics the “selection and presentation” of speech protected even under the narrow interpretation of editorial discretion put forward by the Fifth Circuit.

Second, the labs might argue that RLHF and Constitutional AI involve the collection of the speech of human trainers who have evaluated the model and the curation of that speech by the company for fine-tuning. Thus, a regulation that requires changes in fine-tuning, whether directly or indirectly through the outputs, would require labs to change that mix of curated speech in a way that interferes with editorial discretion. Third, labs could make a similar argument with respect to the underlying training data used to create models. The data used to train contemporary AI models, often the natural language of people on the internet, is selected by the labs and then transformed by their training algorithms. Because different inputs lead to different outputs, controls on what inputs are used could be understood to interfere with the labs’ choice of what kinds of outputs are produced.

It is not clear how the Supreme Court will rule, and it is harder still to predict exactly how any new precedent would apply to AI. The Texas and Florida laws can be understood as “must carry” laws, requiring that social media platforms carry content that they do not want to, and laws of that kind have been upheld against First Amendment challenges in the past. In Turner Broadcasting v. FCC, for example, the Supreme Court applied only intermediate scrutiny to rules requiring cable television providers to reserve a portion of their channels for local broadcast stations. However, the AI labs may actually have stronger editorial judgment arguments than the social media platforms, because the information AI models output is much more closely linked to the labs than the speech of social media users who merely share their views on the platforms is to those platforms. Facebook might argue that it is exercising its editorial judgment, and thus expressing its values as a curator, when it prevents users from spreading racist hate speech on its platform, as the platforms argue in the Texas and Florida cases, but Facebook’s expression is still mediated by the specific speech of the users it is allowing or disallowing. OpenAI, by contrast, is much more directly choosing whether its models speak or do not speak in certain ways through its version of content moderation: the prompter’s speech is necessary to create the AI output, but the outputted speech is much more closely attributable to the AI lab itself.

Courts should find certain levels of government control over AI content moderation unconstitutional under the First Amendment because they would threaten essential parts of political discourse. For example, values-driven approaches like the Anthropic Constitution incorporate a set of political ideas and ideals around topics like racism, property rights, and civil rights that relate closely to the kind of company Anthropic is and what it seeks to communicate to the world. Anthropic’s description of its Constitution emphasizes that it reflects a set of choices about values that the company has used both to improve the user experience with the model and to make it safer and, in the moral sense of the word, better. Though Anthropic also insists that it does not seek to “reflect a specific viewpoint or political ideology,” its choice of values and its use of documents like the United Nations’ Universal Declaration of Human Rights as a foundation for its Constitution do express a political viewpoint that would be affected by governmental efforts to change what values guide its model. An effort to change Anthropic’s values would be viewpoint discrimination, and it likely would be, and should be, unconstitutional.

On the other hand, the government does have a strong interest in ensuring that AI is safe and preventing the worst risks of AI deployment, including bioterrorism and other direct harms to people. Editorial discretion arguments are more likely to be raised in debates over social values and political discourse than over this kind of safety problem, because AI labs are unlikely to want to claim that they are expressing messages of support for terrorism. Still, an expansive definition of editorial discretion might impinge on those kinds of safety regulations as well, because they too concern whether and how AI models disseminate speech.

The limits of editorial judgment are still unclear. Whether, and to what extent, content moderation like that exercised by AI labs is an exercise of protected editorial judgment is a question that will likely be decided in substantial part by the Supreme Court in the new social media content moderation cases. To be clear, not all AI regulations, even regulations of AI outputs, would run afoul of even an expansive ruling in favor of applying editorial discretion online. But as the technology continues to advance, the Court’s decision will inform how the government can regulate both social media content and the wave of AI technology unfolding behind it.

IMAGE: Futuristic circuit board on a dark blue background. (Photo via Getty Images)