When asked about Russian election interference during his congressional testimony last month, Robert Mueller said: “They’re doing it as we sit here.” We now know that part of that interference includes a sophisticated disinformation campaign using social media platforms. To defend the nation against information warfare, the U.S. government has adopted a policy—by default, not by design—of relying on the private sector to police itself, with limited behind-the-scenes government assistance. We do not know how well that policy is working. Congress should obtain information from Facebook, Google and others to find out.
Facebook’s website says: “Our detection technology helps us block millions of attempts to create fake accounts every day and detect millions more often within minutes after creation.” The company shut down more than 2 billion fake accounts in the first quarter of 2019. Similarly, Twitter closed about 70 million fake accounts between May and July of 2018. Google removed numerous “spam subscriptions” from YouTube in December 2018.
These numbers sound impressive, but they do not tell the whole story. To assess the effectiveness of company defenses, we must distinguish among three types of fake accounts: bots, fictitious user accounts and impostor accounts. Bots are automated accounts that operate without significant human intervention after the initial programming. Fictitious user accounts are non-automated accounts created under an invented persona that is presented as a U.S. person. Impostor accounts are non-automated accounts created in the name of an actual U.S. person but operated by a foreign agent who has stolen that person’s identity. Russian agents have created and operated all three types of accounts.
From the perspective of a foreign agent, it is very easy to create both bots and fictitious user accounts, but somewhat more difficult to create impostor accounts. From the standpoint of social media companies, it is relatively easy to detect and block bots, because bot detection can be automated to a considerable extent. It is more difficult to detect fictitious user accounts, and impostor accounts are the most difficult to detect. When Facebook reported that it shut down more than 2 billion fake accounts in the first quarter of 2019, the company did not distinguish among bots, fictitious users and impostors. Regardless, it is fair to assume that the vast majority of those 2 billion fake accounts were bots, because it is much easier for companies to identify bots than it is to identify impostors or fictitious users.
What would be an effective defense against fictitious user accounts? The answer hinges on industry response times. Assume that foreign agents have the capacity to create one new fictitious user account every hour. If it takes social media companies, on average, three months to identify an account correctly as a fictitious user account, then the companies do not have an effective defense against fictitious users. In this type of cat-and-mouse game, Russians and other foreign agents are clearly the winners. On the other hand, if companies can, on average, correctly identify an account as a fictitious user account and block the account within minutes after it is created, then they would have an effective defense against fictitious users.
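The arithmetic behind this cat-and-mouse game can be made concrete. The sketch below is a back-of-the-envelope model, not a claim about any company’s actual figures: at steady state, the number of undetected fake accounts is roughly the creation rate multiplied by the mean time-to-detection (Little’s law). The creation rate and detection times are the hypothetical numbers from the scenario above.

```python
# Back-of-the-envelope model of the response-time argument above.
# Steady-state undetected accounts ~= creation rate x mean time-to-detection
# (Little's law). All numbers are illustrative assumptions, not company data.

HOURS_PER_MONTH = 30 * 24  # rough approximation


def active_fake_accounts(creation_rate_per_hour: float,
                         mean_detection_hours: float) -> float:
    """Expected number of undetected fake accounts at any given moment."""
    return creation_rate_per_hour * mean_detection_hours


# Slow defense: one new fictitious account per hour, three months to detect.
slow = active_fake_accounts(1.0, 3 * HOURS_PER_MONTH)

# Fast defense: same creation rate, but detection within about ten minutes.
fast = active_fake_accounts(1.0, 10 / 60)

print(f"slow defense: ~{slow:.0f} fake accounts active at any moment")
print(f"fast defense: ~{fast:.2f} fake accounts active at any moment")
```

Under these assumed numbers, a three-month detection lag leaves roughly 2,160 fictitious accounts active at any moment, while a ten-minute lag leaves essentially none — the same creation rate, with radically different outcomes.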
In theory, Congress could enact legislation to create a social media registration system that would significantly reduce company response times for both fictitious user accounts and impostor accounts. The most effective system would require all persons who operate social media accounts—including individuals, companies and other entities—to register either as domestic users or foreign users. For maximum effectiveness and efficiency, the U.S. would need to persuade key allies to implement similar registration systems to discriminate between domestic and foreign users in their countries.
Additionally, a rigorous U.S. system would include: a requirement for all registered domestic users to disclose identifying information to social media companies; rules for information sharing between companies and the FBI; and procedures for FBI verification to confirm that all registered domestic users are actual U.S. persons, not fictitious users. Companies would be required to block or close any account that the FBI identifies as a fictitious user account. This type of statutory verification scheme, if implemented properly, would make it practically impossible for foreign agents to create new fictitious user accounts. Other measures would be necessary to provide an effective defense against impostor accounts, but those details need not concern us here.
The core elements of a social media registration system—disclosure of identifying information directly to social media companies and indirectly to the FBI—raise significant privacy concerns. Indeed, for some readers, the words “social media registration system” may conjure an image of Big Brother watching you. At least since Edward Snowden leaked a trove of secret documents to the media in 2013, Americans have been on notice that the government exploits modern communications technologies to collect vast amounts of information about U.S. citizens. In the wake of the Snowden disclosures and other notorious incidents—think of Cambridge Analytica and the Equifax data breach—several states have enacted legislation to provide enhanced protection for informational privacy and data security. The very idea of a social media registration system seems to fly in the face of recent efforts to augment legal protections for individual privacy. Therefore, if Congress decides to create some type of social media registration system, the legislation should include the most rigorous possible protections for both data security and informational privacy.
Setting aside privacy concerns, any good constitutional lawyer could raise several First Amendment objections to the validity of legislation creating a social media registration system. For example, the Supreme Court has often invalidated laws that “chill” too much constitutionally protected speech. Let’s assume that a social media registration system would include the most stringent possible measures to protect data security and user privacy. Even so, some current social media users might conclude that those protective measures do not provide adequate safeguards.
If the law required people to disclose identifying information to social media companies, and it required companies to share that information with the FBI, some people would probably stop using social media to avoid the required disclosures. If the disclosure requirements applied only to people who engage in electoral speech on social media, some people would maintain their social media accounts but refrain from engaging in electoral speech to avoid the disclosure requirements. (Under this variant, Congress would need to draw a statutory distinction between electoral speech and general political speech, similar to the current statutory definition of “electioneering communications.”) In short, even with the most rigorous possible safeguards, a social media registration system would almost certainly have a chilling effect on speech on social media. It is hard to predict how the Supreme Court would rule if it were asked to decide whether that chilling effect renders the statute unconstitutional under the First Amendment.
Would the benefits of a social media registration system outweigh the costs? The answer depends partly on the resolution of two major uncertainties. First, we do not know whether, or to what extent, Russia’s social media campaign succeeded in influencing actual electoral outcomes in the 2016 presidential election. If Russian interference was largely unsuccessful, then intrusive government regulation of social media would be unwarranted. However, if Russia actually altered electoral outcomes in 2016, and if there is a significant risk that foreign interference could alter electoral outcomes in the future, then a social media registration system might be justified.
The second major uncertainty is that we do not know the extent to which companies have succeeded in reducing their response times. Information warfare is like a game that involves moves and counter-moves. If each company’s counter-move takes 10 times longer than each Russian move, the Russians are winning. If companies have reduced their response times so that they can implement counter-moves as quickly as Russia executes its initial moves, then we have a strong private defense against information warfare. As of now, companies have not disclosed sufficient information to enable Congress to determine whether companies have reduced their response times to the point where we have an effective private defense against information warfare.
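The move/counter-move race described above can be sketched as a toy simulation. In this deliberately simplified model (the rates are assumptions for illustration), an attacker creates one fake account per time step, and the defender closes one account every `response_ratio` steps; the open backlog shows who is winning.

```python
# Toy model of the move/counter-move race: an attacker creates one fake
# account per time step; the defender closes one account every
# `response_ratio` steps. The rates are illustrative assumptions only.

def backlog_after(steps: int, response_ratio: int) -> int:
    """Fake accounts still open after `steps` attacker moves."""
    created = steps
    closed = steps // response_ratio  # defender acts once per ratio steps
    return created - closed


# Defender 10x slower than the attacker: the backlog grows without bound.
print(backlog_after(1000, 10))  # 900 accounts still open

# Defender matches the attacker's pace: the backlog stays at zero.
print(backlog_after(1000, 1))   # 0 accounts still open
```

The point of the sketch is the one the paragraph makes in prose: when counter-moves take ten times longer than moves, the attacker’s lead compounds, and only when response times approach the attacker’s tempo does the backlog stop growing.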
Congress can and should require social media companies to provide detailed information about their current response times for detecting and closing bot accounts, fictitious user accounts and impostor accounts. The information should address each category separately to provide an accurate assessment of industry response times. After collecting that information, Congress could make a better-informed judgment about the costs and benefits of different legislative options for regulating (or not regulating) social media companies to defend against information warfare.