Earlier this year, testifying before the Senate Judiciary Committee, FBI Director Chris Wray described the most pressing security threats facing his agency. Unsurprisingly, his opening statement led off with a mention of the Jan. 6 insurrection at the Capitol and the broader problem of rising domestic extremism. Yet Wray then devoted more than twice as many words—nearly a quarter of his prepared remarks—to the problem of strong encryption. (The new preferred label is apparently “user-only-access” encryption, a modest improvement over the tendentious “warrant-proof encryption.”) Wray would later attempt to link the two issues, insinuating that the failure to adequately prepare for the Capitol riot was attributable to the insurrectionists’ use of encrypted communications platforms.
These claims should be greeted skeptically.
Yet Another Cryptopocalypse
Some of the Capitol insurrectionists used encrypted messaging apps, as do millions of law-abiding Americans. But the planning for the violence of Jan. 6 was shockingly overt. Much of it occurred in plain view, via countless posts on public forums, which the law enforcement community either failed to notice or failed to take seriously. But encryption is not just a convenient scapegoat for an intelligence failure—though it is that as well. Wray’s comments merely marked the continuation under the Biden administration of a war on strong encryption that the FBI has been waging for decades, opportunistically seizing on whatever security threat is most recently in the headlines to make the case.
Neither was there anything new in Wray’s ominous warning that we “are moving more and more in a direction where if we don’t come up collectively with some kind of solution … we will not be able to get access to the content and the evidence that we need to protect the American people.” Dubious predictions of an impending cryptopocalypse have been a mainstay of the Crypto Wars from the beginning: In 1992, the FBI predicted that by 1995, nearly half of all phone calls would be inaccessible to wiretappers thanks to widespread encryption. More recently, Wray’s FBI pressed the case for anti-encryption legislation using bogus statistics that massively inflated the number of smartphones law enforcement was thwarted from accessing by encryption. The FBI showed no sign of being fazed by this embarrassing disclosure, nor did the Bureau’s allies in Congress, who last year introduced a bill mandating government backdoors in cryptographic algorithms.
Encryption in the Founding Era
A persistent theme of law enforcement’s anti-encryption rhetoric is the idea—sometimes explicit, sometimes merely implied—that widespread use of encryption represents a radical departure from the historical norm, the good old days when a lawfully authorized search of correspondence was guaranteed to yield something easily intelligible. On occasion, this argument even frames the Fourth Amendment as a kind of tacit quid pro quo: Citizens have a right against warrantless searches (with numerous exceptions), but when the government jumps through the necessary hoops to obtain a warrant, it is entitled not merely to conduct a search, but to succeed in obtaining what it seeks—and the law must ensure that technology cannot frustrate that guaranteed outcome.
Needless to say, this elides the myriad ways that modern law enforcement and intelligence agencies inhabit a Golden Age of surveillance, with countless investigative and monitoring tools their predecessors could only dream of. Even bracketing that convenient omission, however, this picture profoundly distorts history.
One useful corrective to that distortion is provided by a fascinating monograph published by the National Security Agency, “Masked Dispatches: Cryptograms and Cryptology in American History, 1775–1900.” As NSA’s in-house historian Dr. Ralph Weber memorably puts it, “America was born out of revolutionary conspiracy”—and the conspirators, America’s founders, saw encryption as an “essential instrument for protecting critical information in wartime, as well as in peacetime.” A resolution of the Continental Congress provided for encrypted communications when a document was “of such a nature as cannot be safely transmitted without cyphers”—no surprise in wartime—but the habit of routinely enciphering correspondence did not end with the revolution. “In the years after 1780,” Weber explains, “Jefferson, James Madison, James Monroe, and a covey of other political leaders in the United States often wrote in code to protect their personal views on tense domestic issues confronting the American nation.” The polymathic Thomas Jefferson even developed an early bit of cryptographic hardware, a cipher wheel to aid in the laborious process of enciphering and deciphering messages. Nor was the practice limited to statesmen: A Colonial Era primer for young men published by Benjamin Franklin included instructions on the use of codes and ciphers in letters (along with advice on accounting, carpentry, and dye-mixing).
The habit waned over time as postal service became more reliable—“the Founding Fathers were much more anxious than their successors to encrypt their confidential correspondence”—but in the early days of the republic, even after the British had been sent packing, encryption was the only viable way to guarantee the security of correspondence that might easily fall prey to an untrustworthy courier or interception on the roads. (As Jefferson put it to one plaintext correspondent, “the infidelities of the post office and the circumstances of the times are against my writing fully & freely.”) The situation today is somewhat analogous: The Internet is a packet-switched “network of networks” that transmits data across systems owned by many different entities, and across many different legal jurisdictions, via a combination of wired and wireless connections, offering countless points at which data might be intercepted and, if unencrypted, read.
Ciphers in use at the time were crude by modern standards—the cheapest modern laptop would make short work of any of them—but many were as unbreakable in their era as the most sophisticated cryptographic algorithms are today. The Vigenère cipher, first described in 1553, was hailed as “le chiffre indéchiffrable” (“the indecipherable cipher”) and remained unbroken until computing pioneer and steampunk icon Charles Babbage mounted a successful attack fully three centuries later. (Since Babbage didn’t bother to publish his method, formal credit for cracking the Vigenère usually goes to Prussian cryptographer Friedrich Wilhelm Kasiski, who described an attack on the cipher in print a decade later.) As president, having mothballed his own quite impressive cipher wheel, Thomas Jefferson chose the Vigenère as the cipher to be employed by the Lewis and Clark expedition. Many enciphered 18th century texts therefore proved indecipherable until 20th century cryptanalytic techniques (and computing power) could be brought to bear—and indeed, some still remain unsolved.
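The mechanics that stumped cryptanalysts for three centuries are simple enough to sketch in a few lines of Python: each plaintext letter is shifted by an amount determined by the corresponding letter of a repeating keyword. (A toy illustration, of course; Kasiski-style analysis makes short work of it today.)

```python
def vigenere(text: str, key: str, decrypt: bool = False) -> str:
    """Encrypt or decrypt uppercase A-Z text with the Vigenère cipher."""
    result = []
    for i, ch in enumerate(text):
        # The keyword repeats: the i-th letter is shifted by the
        # alphabetic position of the (i mod len(key))-th key letter.
        shift = ord(key[i % len(key)]) - ord("A")
        if decrypt:
            shift = -shift
        result.append(chr((ord(ch) - ord("A") + shift) % 26 + ord("A")))
    return "".join(result)

ciphertext = vigenere("ATTACKATDAWN", "LEMON")   # -> "LXFOPVEFRNHR"
assert vigenere(ciphertext, "LEMON", decrypt=True) == "ATTACKATDAWN"
```

Because the shift varies with position, simple letter-frequency analysis fails; only by guessing the keyword length (Babbage’s and Kasiski’s insight) does the cipher reduce to a set of solvable Caesar shifts.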
As anachronistic as it might sound, Professor Orin Kerr argues in a recent Harvard Law Review article that constitutional law scholars can meaningfully speak of a “Decryption Originalism” rooted in the Founding Era’s experience with, and attitudes toward, encrypted communications. Kerr is concerned with the question—raised in the course of former vice president Aaron Burr’s treason trial, and freshly relevant once again—of whether the Fifth Amendment right against self-incrimination prevents courts from compelling the disclosure of the cryptographic key or passcode to an enciphered message. (Kerr’s short answer is “it depends,” and his nuanced analysis is worth reading in full.) We can, I believe, similarly speak of an “Encryption Originalism” that would illuminate potential constitutional barriers to the sort of “lawful access” mandates sought by the FBI.
Courts have often looked to historical practice as a guide to understanding the scope of constitutional rights. In McIntyre v. Ohio Elections Commission (1995), the Supreme Court invalidated an Ohio statute prohibiting the distribution of anonymous campaign literature, leaning heavily on the Founding Era practice of anonymous and pseudonymous pamphleteering—most famously in the form of the Federalist Papers, authored by Alexander Hamilton, John Jay, and James Madison using the collective pseudonym of “Publius.” Though the text of the First Amendment does not explicitly say whether the “freedom of speech” it protects includes the right to speak anonymously, our political traditions provide strong evidence of how the public at the time of ratification would have understood the phrase. In a paper published a few years after McIntyre, John A. Fraser III argued that the use of codes and cyphers to protect communications should similarly be viewed as an “ancient liberty” falling within the protection of the First and Fourth Amendments’ guarantees of free expression and privacy.
The Freedom of Encrypted Speech
The grounds for First Amendment protection are relatively straightforward, along at least two dimensions.
First, computer code itself is a form of expression entitled to constitutional protection, as a federal district court found in 1996. In a case challenging export restrictions on encryption software, the court wrote: “Like music and mathematical equations, computer language is just that, language, and it communicates information either to a computer or to those who can read it.” An encryption algorithm is ultimately a set of mathematical instructions for rearranging data—instructions that can be described abstractly, in English, in a computer science textbook, or in a form computers find easier to execute. In principle—if not in practice, except perhaps as a very nerdy stunt—a human being could execute any software’s cryptographic transformation of a text by hand with pen and paper. If the First Amendment prohibits the government from banning publication of a book explaining those instructions in English, it should similarly protect a software developer who wishes to distribute those instructions in machine-readable form.
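The equivalence is easy to demonstrate. Below, one and the same instruction—“combine each unit of the message with the corresponding unit of the key”—is rendered in a form a computer executes directly (a minimal sketch using XOR, the operation at the heart of the one-time pad; the key bytes are illustrative, not from any real system):

```python
def xor_cipher(message: bytes, key: bytes) -> bytes:
    # "Combine each byte of the message with the corresponding key byte":
    # the same instruction a textbook could state in English, here in
    # machine-readable form. With XOR, applying it twice recovers the message.
    return bytes(m ^ k for m, k in zip(message, key))

key = bytes([0x13, 0x5F, 0xA2, 0x07, 0x4C])   # hypothetical key material
ciphertext = xor_cipher(b"HELLO", key)
assert xor_cipher(ciphertext, key) == b"HELLO"
```

A patient human with pen, paper, and a table of binary values could carry out exactly the same procedure; the code merely states the instructions in a dialect machines happen to read fluently.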
Second, there are the First Amendment interests of the end user to consider. While proposed lawful access mandates are invariably directed at software developers and communications platforms, their ultimate purpose is to constrain the form users’ expression takes—to compel them, in other words, to express their ideas in a form intelligible to the government. The First Amendment defects of such a mandate, if applied directly to communicants, would be obvious. During World War II, the U.S. military employed Native American “code talkers” for communications security. Navajo (most famously, but also Choctaw, Cherokee, and Comanche, among others) was sufficiently difficult to reverse-engineer that, with the Axis powers lacking access to Native American speakers, the Navajo-based code was for many purposes as good as a cipher—not to mention much faster. Yet nobody imagines the government could require individuals to communicate only in languages understood by FBI employees.
Obviously, prohibiting use of a natural human language for the convenience of law enforcement would be offensive and discriminatory in numerous ways that don’t apply to algorithmically mediated communication. But I believe a core element of what is repugnant in the idea remains even after those concerns are factored out: The presumption that the government may dictate to us the form our expression takes in order to ensure that expression is readily comprehensible to the government. Achieving this aim by means of a mandate on intermediaries or software developers obscures what’s going on sufficiently to dampen the visceral reaction we would have were the directive aimed at the individual.
Targeting software developers and communications platforms is a viable alternative to regulating the end user because the average citizen is not very good at writing code, remembering long numerical strings, or mentally multiplying large prime numbers rapidly—we need help from other people and machines to do those things well at a useful speed. But the fundamental goal is still to restrict the forms individual expression may take. Before those shortcuts were available, the Framers of the Constitution routinely expressed themselves in a form that would have been unintelligible to any government agent of their era—as illustrated by the need in the Burr case to seek the aid of Burr’s private secretary. Presumably they believed they had a right to do so.
Fourth Amendment Protections
The Fourth Amendment “encryption originalist” argument (my version of it, anyway) is less straightforward, because it proceeds from an understanding of the Fourth Amendment that can read like ciphertext from the perspective of contemporary jurisprudence.
The Fourth Amendment’s core guarantee is that “the right of the people to be secure … against unreasonable searches and seizures, shall not be violated.” Two of the key terms in that brief, critical clause are given surprisingly little weight in contemporary Fourth Amendment theory and case law alike: “people” and “secure.” In recent years, however, a number of legal scholars have begun arguing that taking these parts of the text seriously in their historical context yields a picture of the Fourth Amendment that diverges in important ways from the currently dominant reading. In what follows I draw heavily on arguments advanced in David Gray’s “The Fourth Amendment in an Age of Surveillance,” Luke Milligan’s “The Forgotten Right to Be Secure,” Jed Rubenfeld’s “The End of Privacy,” and entirely too many works to list by Thomas K. Clancy. The examples of Founding Era pamphlets condemning general warrants come primarily from William J. Cuddihy’s “The Fourth Amendment: Origins and Original Meaning, 602–1791.”
The Right of the People
Start with “the right of the people.” The Framers were, as a rule, fairly deliberate about assigning constitutional rights, powers, and duties to their respective bearers. They knew how to characterize a purely individual right—“no person shall be held to answer for a capital, or otherwise infamous crime”— but settled on “the people” as the bearers of the Fourth Amendment’s right “to be secure.” While this should not, of course, be understood as a denial that the Amendment creates an individual right, the choice of the collective noun suggests an additional dimension, perhaps reflecting the view that “unreasonable searches and seizures”—or the general warrants prohibited in the Amendment’s second clause—inflict harms on the polity as a whole above and beyond the injury to individuals unreasonably searched and seized.
We’re already accustomed to thinking of other rights with this mix of individualistic and collective rationales. The Second Amendment’s right to bear arms—another right of “the people”—is justified not merely in terms of gun owners’ individual interest in self defense or sport, but also in terms of the value to “the security of a free State” in having enough of the population armed to be able to serve as an effective militia. This dual structure is perhaps most familiar in the case of the First Amendment’s protection of freedom of speech, which is widely understood to have both an individual and a collective or structural component. I have an individual right to free speech because citizens are all fundamentally equal, and you cannot respect people as equals if you forbid them from expressing ideas core to their identity, even if those ideas seem worthless or even harmful. But there is also the structural rationale: We all live under rules that emerge from democratic deliberation, and so even if I have nothing controversial to say, my freedom depends on diverse perspectives, including criticism of popular views and officials, being freely aired. The collective interest in expressive rights is reflected in the willingness of courts in First Amendment cases to give weight to the potential “chilling effects” of laws or policies regulating speech—that is, the effect on people who never suffer the direct injury of a government penalty on speech, because they are deterred from speaking.
Similarly, David Gray argues that the Fourth Amendment “is concerned primarily with policies and practices, such as general warrants and writs of assistance. Individual cases may provide examples of those kinds of policies and practices in action, as did the search of John Entick’s home in the general warrants cases, but the primary concern is the threat against the right of the people these instances represent.” Under standing rules shaped by a purely individualistic conception of the Fourth Amendment, “individual litigants have a hard time challenging programs and policies,” such as programs of large-scale data collection or electronic surveillance. This, Gray argues, inverts the priorities of the Framers, whose assignment of the “right to be secure” to “the people” as a whole “bespeaks a founding-era understanding that security from unreasonable search and seizure is linked to collective projects of self-governance.” Consider, by way of illustration, the argument advanced by the late William J. Stuntz that the Framers understood the Fourth Amendment as, in part, a kind of structural backstop or failsafe meant to complement the First Amendment’s protections: Congress might wish to disregard the injunctions of the First Amendment and erode guarantees of press freedom, but the inability to freely enter homes or search through personal papers presents a formidable practical obstacle to fully regulating unpopular ideas or faiths.
The Right To Be Secure
Now turn to the right “to be secure.” In practice, most modern jurisprudence treats this phrase as surplusage, constitutional noise adding nothing to the meaning of the Fourth Amendment. Courts effectively read that clause as though it said simply that the right “against unreasonable searches and seizures” shall not be violated. But if we wish to take the text seriously, we should assume the word “secure” is in there for a reason—that a “right … to be secure … against unreasonable searches” means something different—if perhaps only subtly different—from a “right … against unreasonable searches.”
In our ordinary, contemporary linguistic practice, “security” encompasses more than the mere absence of breach. A bank vault or a computer system that has suffered an actual invasion is, self-evidently, insecure. The converse, however, is not true: A vault, home, or computer system may be “insecure” without having suffered an actual breach. When we ask whether a facility is “secure,” we are not normally asking merely whether a breach has occurred, but whether mechanisms are in place that render the facility reasonably free from the danger of a successful breach.
To understand what a right “to be secure” might mean in the context of the Fourth Amendment, however, we can’t restrict ourselves to contemporary usage, but need to consider how the term was understood at the time of ratification. Samuel Johnson’s “Dictionary of the English Language” offers several meanings for the word “secure,” of which the most obviously pertinent to the Fourth Amendment include “free from fear,” “sure, not doubting,” and “free from danger, that is, safe.” While these meanings have largely remained stable to the present day, some formerly common usages of the word ring somewhat oddly in modern ears. A 17th or 18th century writer might use the word “secure” or “security,” on its own, to refer to the psychological state of ease that a modern speaker would more commonly express as “feeling secure.”
In Shakespeare, for instance, we find the ghost of Hamlet’s father—clearly a victim of a false sense of security—describing the circumstances of his murder:
Brief let me be. Sleeping within my orchard,
My custom always of the afternoon,
Upon my secure hour thy uncle stole,
With juice of cursed hebona in a vial
We find a similar usage in a letter of 1806 from the Scottish linguist Alexander Murray to publisher Archibald Constable. Ruminating on the possibility of an attack by Napoleon Bonaparte, Murray warns that “we have no cause to be too secure.” Alexander Hamilton cautions in Federalist 24 against “an excess of confidence or security,” despite the vast ocean separating the United States from European depredation. A modern writer would more probably have said “we have no cause to feel too secure,” and a modern reader might regard the idea of an “excess of security” as paradoxical.
To be “secure against” unreasonable searches, I suggested above, implies something more than the mere absence of such searches—some further facts or mechanisms in virtue of which it is reliably assured that no unreasonable searches will occur. If we understand “secure” in this partly-subjective sense, these mechanisms should tend not only to eliminate unreasonable searches themselves, but also render law-abiding citizens “free from fear” that they will be subject to unreasonable searches.
The tendency of a broad and discretionary search authority to undermine the “security” of the people was a frequent theme of the colonial jeremiads against general warrants that both motivated and inspired the creation of the Fourth Amendment. The word “secure” and its derivatives recur often, typically coupled with the idea that practices of promiscuous search inflict a sort of collective injury. In the widely-circulated “Boston Pamphlet” of November 20, 1772, Boston’s town committee blasted promiscuous searches by customs officers who were “by their Commission, invested with Powers altogether unconstitutional and entirely destructive to that Security which we have a right to enjoy.” Colonists’ homes were “exposed to be ransacked,” leaving them “cut off from that domestic security which renders the Lives of the most unhappy in some measure agreeable.” Note that it is the state of being “exposed to be ransacked”—not necessarily the actual ransacking—that spoils the sense of “domestic security.”
Like other tracts of its kind, the Boston Pamphlet has plenty of unkind words for the “Wretches” entrusted with carrying out customs searches and their penchant for “wanton exercise” of their authority. But the objection here is not only to the disruption and offense inflicted on individual citizens during individual searches, but to the anxiety and loss of dignity inflicted on the community as a whole by the mere existence of a discretionary search power “more absolute and arbitrary than ought to be lodged in the hands of any Man.” This view of promiscuous search authority as a threat to collective “security” probably finds its most succinct and explicit expression in an editorial historians attribute to James Otis, the legendary lawyer who (unsuccessfully) argued against the writs of assistance in Paxton’s Case. Condemning general warrants, Otis wrote:
[E]very householder in this province, will necessarily become less secure than he was before this writ had any existence among us … Will any man put so great a value on his freehold, after such power commences as he did before? [emphasis in original]
We can infer from these brief lines several things about the concept of “security” employed by Otis, whose great admirer and protégé John Adams would later write it into the state constitutional provision upon which the Fourth Amendment was based. First, it is collective: It is every householder who is rendered less secure by discretionary search powers. Second, it is a function of general legal structures: Particularly “wanton” instances of abusive searches may, of course, lead people to feel more or less secure at any given time, but the existence of a general class of formal search authorities implicates security even prior to the execution of any particular searches. Third, it is a hybrid that encompasses both the objective and subjective senses of “security” discussed above. Householders under a regime of general warrants will be objectively more likely to experience a search, but even those who do not will experience a pervasive background anxiety because of the constant threat.
While he never shies from denouncing the conduct of the customs agents who executed writs of assistance, Otis’ core argument is grounded in the Lockean notion that no valid law, consistent with “universal reason,” could give some individual members of society such significant discretionary power over others: “an Act against natural Equity is void: and if an Act of Parliament should be made, in the very Words of this Petition, it would be void.”
A nearly ubiquitous refrain in the literature opposing general warrants emphasizes that they “destroy utterly the notion of a man’s house being his castle”—or as John Dickinson’s “Farmer’s Letters” put it, “a place of perfect security.” Echoing William Pitt’s famous pronouncement that “[t]he poorest man may in his cottage bid defiance to all the forces of the Crown,” the formula of the home as “castle” implicitly recognizes that the value of common law barriers to discretionary intrusion did not consist merely in avoiding the practical inconvenience and embarrassment of a physical invasion. The public affirmation of a principle of inviolability was also an affirmation of a form of civic equality, a component of the subject or citizen’s self-image that served as a social basis of self-respect.
A lament by the “Freeman” captures this sense that the intimate sphere is degraded—in a sense contaminated—by the knowledge that, far from being sacrosanct, it remains inviolate only at the whim of (low-status) customs inspectors: “What are the pleasures of the social table, the enlivening countenances of our family and neighbors in the fire circle or any domestic enjoyment if not only Custom House Officers but their very servants may break in upon and disturb them?” Arthur Lee, under the pseudonym “Junius Americanus,” similarly protested that British authorities had “laid open every man’s house in America to a General Warrant and left his property at the mercy of every infamous informer.”
James Pemberton, writing on behalf of Philadelphia’s Quakers, argued that “in any free country” the powers granted by a general warrant “would be reprobated as over-turning every security that men can rely on,”—a phrasing that emphasizes the pervasive uncertainty that such powers create. Pemberton had good reason to be attuned to this dimension of security: Members of his mistrusted religious denomination had just been subjected to indiscriminate seizure of books and private papers—a species of invasion that evoked special horror— “upon a bare possibility that something political may be found.” Though Pemberton was objecting in the instance to the harassment of his own religious community, he insisted that no man could “think himself safe, from the like, or perhaps more mischievous effects, if a precedent of so extraordinary a nature be established by tame acquiescence in the present wrong.”
I belabor these examples (at perhaps excessive length) because a Fourth Amendment analysis of lawful access mandates for encrypted communications looks very different if we accept that the right “to be secure” is not some mere rhetorical flourish, but rather that it incorporates an important concept that permeated Founding Era discourse around general warrants and government searches. This concept of “security” is both partly collective and partly subjective: It is the confidence of the polity as a whole that certain protected spaces are inviolable, and can be injured by the existence of authorities licensing discretionary searches, even in advance of an actual search.
A law requiring communications platforms and software developers to retain backdoor keys to users’ encrypted communications is not, at least intuitively, a search or seizure of the users’ communication—or even an authorization to search. The mandate to developers and platforms would enable searches subsequently authorized via the normal warrant process, but would not, on most people’s understanding, constitute a search in itself—reasonable or otherwise. The Fourth Amendment may nevertheless have something to say about backdoor mandates, however, if we understand the “right of the people to be secure” against unreasonable searches as something broader in scope than an individual right against the execution of unreasonable searches.
Cybersecurity experts have been virtually unanimous in arguing that backdoor mandates reduce the security of communications for all users by introducing new vulnerabilities and attack surfaces, which state and non-state actors alike may seek to exploit. Any centralized repository of cryptographic keys—whether held by the government or by developers and platforms in anticipation of a court order to produce them—would create an attractive target for sophisticated attackers (whether outside hackers or corrupt insiders). Such a target simply does not exist in the case of true end-to-end encryption, where keys are stored only on millions of individual devices.
The need to provide a mechanism for surreptitious backdoor access would also seriously complicate the already Herculean task of producing stable and secure software. The most advanced technology companies on the planet—titans like Microsoft and Google—do not know how to reliably produce fully secure software. New security updates and patches are regularly released because new flaws and vulnerabilities are constantly being discovered. This task becomes far harder, however, when the software is required to include a mechanism for surreptitious breach—that is, a way that the government can obtain a user’s unencrypted messages without the user becoming aware of it. Companies spend enormous effort working to block the myriad means by which an attacker might access the contents of communications—such as “man in the middle” attacks, wherein attackers interpose themselves in a communication stream and effectively “impersonate” both parties to a communication in order to vacuum up the messages before passing them along. The logic of backdoor mandates requires that developers build in countermeasures to their own countermeasures—mechanisms for circumventing the code that otherwise functions to alert users to an unauthorized access attempt. This necessarily renders the systems less secure than they would be absent such a mandate.
The prospect of government surveillance tools being repurposed by hostile parties is by no means hypothetical. In Greece in 2004–2005, telecommunications switches designed with a “lawful access” function for government wiretaps were hijacked—probably by the National Security Agency—and used to spy on the cell phones of more than 100 Greek public officials. More recently, the tables were turned on NSA when a suite of the agency’s most advanced digital access tools were stolen from a staging server and eventually leaked by a group calling itself the “Shadow Brokers”—after first being used in the wild for some 14 months. One of those tools, an exploit known as EternalBlue, was later repurposed for the ruinous WannaCry ransomware attack, which affected hundreds of thousands of computers and inflicted an estimated $4 billion in damages.
Best practices in information security are constantly evolving to address new threats, and backdoor mandates at best complicate this evolution—and at worst bar it entirely. One group of prominent cybersecurity experts has explained the potential implications of lawful access mandates for the rapid adoption of a security principle known as “perfect forward secrecy”:
With forward secrecy, a new key is negotiated with each transaction, and long-term keys are used only for authentication. These transaction (or session) keys are discarded after each transaction — leaving much less for an attacker to work with. When a system with forward secrecy is used, an attacker who breaches a network and gains access to keys can only decrypt data from the time of the breach until the breach is discovered and rectified; historic data remains safe. In addition, since session keys are destroyed immediately after the completion of each transaction, an attacker must interject itself into the process of each transaction in real time to obtain the keys and compromise the data.
The security benefits make clear why companies are rapidly switching to systems that provide forward secrecy. However, the requirement of key escrow creates a long-term vulnerability: if any of the private escrowing keys are ever compromised, then all data that ever made use of the compromised key is permanently compromised. That is, in order to accommodate the need for surreptitious, third-party access by law enforcement agencies, messages will have to be left open to attack by anyone who can obtain a copy of one of the many copies of the law enforcement keys. Thus all known methods of achieving third-party escrow are incompatible with forward secrecy.
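The ephemeral-key mechanism the quoted passage describes can be sketched in a few lines. In this toy exchange (illustrative parameters only, not a secure or real-world implementation), each session negotiates a fresh key that is discarded when the session ends, so nothing retained afterward can reproduce it.

```python
import hashlib
import secrets

# Toy Diffie-Hellman group for illustration only; not secure.
P = 2**127 - 1
G = 3

def ephemeral_keypair():
    """A fresh per-session secret, discarded when the session ends."""
    secret = secrets.randbelow(P - 2) + 1
    return secret, pow(G, secret, P)

def session_key(my_secret, peer_public):
    shared = pow(peer_public, my_secret, P)
    return hashlib.sha256(str(shared).encode()).hexdigest()

# Session 1: both parties generate ephemerals and derive the same key...
a1, A1 = ephemeral_keypair()
b1, B1 = ephemeral_keypair()
key1_alice = session_key(a1, B1)
key1_bob   = session_key(b1, A1)
# ...then delete the ephemeral secrets. No retained material can
# regenerate key1, so recorded ciphertext from session 1 stays sealed.
del a1, b1

# Session 2 negotiates an entirely unrelated key.
a2, A2 = ephemeral_keypair()
b2, B2 = ephemeral_keypair()
key2 = session_key(a2, B2)
```

A key-escrow mandate inverts this design: some long-lived escrowed secret must remain capable of recovering every session key, so the material the protocol deliberately destroys would, in effect, live on in the escrow database.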
We can abstract away from the details of forward secrecy to frame the problem somewhat more generally: Cybersecurity is an arms race. It is adaptive. Attackers innovate constantly, discovering new vulnerabilities and methods of attack, forcing defenders to respond by innovating in turn. Systems, devices, and networks that are adequately “secure” according to our best available knowledge about the current threat environment may prove to be woefully insecure tomorrow. Backdoor mandates would profoundly disrupt this process: Developers could not move to address a newly realized vulnerability, or adopt an emerging consensus about a novel best practice, without first ensuring that the latest patch did not break the lawful access mechanism. We should expect such conflicts to arise regularly because the goals of security and lawful access are inherently at odds: One seeks to eliminate the possibility of surreptitious access unauthorized by the user, the other, to guarantee it.
Perhaps most importantly, however, strong encryption establishes a structural barrier—a form of collective security—against indiscriminate bulk collection of communications. The paradigmatic “unreasonable search” in the Founding Era—the chief evil the Fourth Amendment aimed to avoid—was the general search, conducted at the discretion of a government agent without restriction to particular persons, places, and purposes. In the absence of encryption, modern telecommunications architecture coupled with sophisticated network monitoring tools has made it technologically possible, for the first time in human history, to conduct near-universal searches of correspondence. As a result of disclosures by former NSA contractor Edward Snowden, we know that for many years, the U.S. intelligence community engaged in just such large-scale collection, in cooperation with the British Government Communications Headquarters. Under a program codenamed MUSCULAR, vast quantities of data were obtained from the (then) unencrypted private data links connecting the overseas data centers of massive technology companies like Yahoo and Google—circumventing the procedural rules that would constrain domestic collection under the Foreign Intelligence Surveillance Act. The data was then collegially sent for storage at NSA’s Fort Meade data warehouses. There is, at the very least, a legitimate question of whether programs like MUSCULAR violate the Fourth Amendment rights of U.S. persons swept up in the process. But the combination of strict standing rules and state secrecy makes it virtually impossible to actually test that question before a neutral magistrate. The only practical “security” citizens can enjoy against such general searches is cryptographic.
On an understanding of the Fourth Amendment that takes “the right of the people to be secure” seriously, these considerations may not merely constitute strong policy reasons to eschew mandatory “lawful access” to encrypted communications, but may have constitutional significance as well. Encryption originalism views strong encryption not as some novel threat to the public order, but as the modern reemergence of a common Founding Era practice of employing ciphers—often unbreakable by the authorities of the day—to secure personal communications. Now, as then, it is a practice profoundly important to a system of ordered liberty, a practical guarantor—and often the only meaningful guarantor—of the freedom from indiscriminate or discretionary monitoring necessary to a self-governing polity.