On Monday, the Supreme Court will hear argument about whether to uphold a vaguely worded order from the Fifth Circuit that arguably bars White House officials and several executive branch agencies, including the FBI, from urging social media companies to take down disinformation and misinformation that drives political violence, undermines democratic processes, and makes our nation less secure.

As the Court considers Murthy v. Missouri—just as it should in considering the NetChoice “anti-censorship” cases argued last month—it must recognize not only the substantial national security and public safety harms from disinformation and extremist content on social media, but also the need for government officials to be able to communicate freely with social media companies about the abuses of their services by malign actors. And that includes the government urging those platforms to take action.

The case before the Court on Monday was brought by five social media users and the states of Missouri and Louisiana, who alleged that the government had engaged in a sprawling campaign to threaten social media companies into removing or suppressing content expressing disfavored viewpoints, particularly related to the COVID pandemic, vaccines, and election fraud.

The district court and the U.S. Court of Appeals for the Fifth Circuit agreed, holding that certain federal officials likely had “coerced” the companies to remove disfavored speech by threatening adverse action if they did not, or had “significantly encouraged” them to remove such content, either way running afoul of the First Amendment. For example, the Fifth Circuit concluded that the FBI—by warning social media companies of “hack and dump” operations by foreign “state-sponsored actors” that would spread misinformation about the 2022 midterm elections on their platforms—had likely engaged in prohibited coercion not because the bureau’s warnings conveyed any actual threat of adverse action for failing to take down the misinformation, but because of the inherent coercion that comes with a request from law enforcement.

The Fifth Circuit also concluded that the FBI likely had “significantly encouraged” social media companies to moderate content through what the court determined was “entanglement” in the companies’ decision-making processes. This entanglement apparently arose from the FBI’s recommendations to companies about how their moderation policies could be modified to better address the ways their platforms were being used to spread dis- and misinformation. Again, even without evidence of any threats made to the social media companies, the Fifth Circuit determined that the FBI “commandeered” those companies’ moderation policies, so that any decisions to remove content under those policies were the result of “significant encouragement” by the FBI, in violation of the First Amendment.

The result: a broadly worded injunction barring the FBI and certain other government officials and agencies from “directly or indirectly” “coerc[ing] or significantly encourag[ing] social-media companies to remove, delete, suppress, or reduce” social media content containing protected speech.

What does this mean in plain English? If upheld by the Supreme Court, it means we are more vulnerable to disinformation and less safe as individuals and as a country. As a group of former national security officials (including me) wrote in an amicus brief filed in the NetChoice cases, “Although . . . information warfare is nothing new, the rise of social media and other online platforms has created a novel and more treacherous battlefield in the war, where foreign adversaries and other malign actors can spread disinformation, propaganda, and recruitment materials far more widely and effectively than ever before. These efforts increase political polarization, sow discord, generate mistrust in governments and institutions, and undermine the national security of the United States.” This concern was echoed by the Chairman of the U.S. Senate Select Committee on Intelligence, Senator Mark Warner, who warned, in an amicus brief in Murthy, that “[f]oreign governments, including Russia, China, and Iran, are experienced in operating influence campaigns over social media, and there is no evidence they intend to stop. Foreign malign influence campaigns, including election influence campaigns in 2024, will only continue to grow in number, scope, and intensity.”

Effectively combating this assault requires the government, academics, private researchers, and social media companies to work together. Yet the capacious interpretations of “coercion” and “significant encouragement”—the key terms used in the preliminary injunction issued by the Fifth Circuit—have already chilled the sharing of information between the government and social media companies, even while the injunction itself is on hold during the pendency of the case before the Supreme Court. After the Supreme Court suspended the injunction and granted certiorari, FBI Director Christopher Wray testified in October that the FBI was “having some interaction with social media companies, but all of those interactions have changed fundamentally in the wake of the [Fifth Circuit’s] ruling.” At that same hearing, Department of Homeland Security Secretary Alejandro Mayorkas said that DHS had stopped participating in periodic meetings with tech companies to share information about the “threat environment that the homeland face[s].” In a universe in which agencies like the FBI and DHS can be held responsible for “coercion” and “significant encouragement” simply by virtue of their status as law enforcement entities, even where no threats of adverse action are made (and even where former social media company officials deny taking action due to coercion or significant encouragement as the Fifth Circuit defined it), it is understandable that these agencies would exercise extreme caution going forward. That caution has significant national security consequences.

The social media companies themselves have recognized their limitations when it comes to detecting influence operations organized outside of their networks. Meta’s head of security policy has explained that, before the 2020 election, tips from law enforcement enabled the company to dismantle covert influence operations based in Russia, Mexico, and Iran. Meta’s most recent adversarial threat report noted that foreign influence campaigns gain the most traction “when they manage to co-opt real people—politicians, journalists or influencers—and tap into their audiences.” The targets are on both sides of the political aisle, as reflected in the National Intelligence Council’s recently released assessment of foreign threats to the 2022 midterms. But even as we can expect more of this during the 2024 presidential campaign, and even as a new study shows China intensifying efforts to conduct “disinformation campaigns aimed at sowing division within U.S. society”—moving it, as the U.S. Intelligence Community warns, “closer to Moscow’s playbook for influence operations”—the government holds back, chilled by the Fifth Circuit and the looming Supreme Court decision.

Our national security cannot afford this reluctance to share information that social media companies need and want to curtail abuses of their services by bad actors—especially foreign bad actors—out to harm Americans and America. Meta’s chief of global threat intelligence has publicly acknowledged that the company “believe[s] that it’s important that we continue to build on the progress the defender community has made since 2016 and make sure that we work together to keep evolving our defenses against foreign interference.” Disinformation and misinformation spread through social media divide us and weaken us. That is a national security threat the Supreme Court cannot ignore.

Mary McCord is one of the counsel representing Maria Vullo, former superintendent of the New York Department of Financial Services, in National Rifle Association v. Vullo, scheduled for argument the same day as Murthy. That case is about the right of government officials to enforce the law and to speak out about matters of public concern.