Surely without a hint of irony, just a day after WikiLeaks dumped a vault-load of documents detailing the Central Intelligence Agency’s use of hacking tools and software exploits, FBI Director James Comey told an audience at a Boston College conference on cybersecurity that “[t]here is no such thing as absolute privacy in America.” Comey’s elevator pitch in support of that claim was that “there is no place outside of judicial reach,” citing the fact that even time-tested testimonial privileges of the spousal, clergy–penitent, and attorney–client sort can be pierced by judges in “appropriate” circumstances. Comey’s argument, which he has made with drumbeat regularity for several years now, is that sure, privacy is important, but law-enforcement access is paramount: the government and the courts, not technology, should decide when the government can get to your private information.
If only things were that simple. Comey has at various times tried to disclaim any desire to have Congress mandate “backdoors” to encryption-enabled devices and services, even getting himself laughed off of C-SPAN when he suggested that his approach would provide a “front door” instead. When it comes to encryption, doors are doors, and—as Julian Sanchez comprehensively explained more than two years ago, at the dawn of the Crypto Wars sequel—they are a truly terrible idea. To briefly recapitulate Julian’s post: “it is damn near impossible to create a security vulnerability that can only be exploited by ‘the good guys’”; “there are lots of governments out there that no freedom-loving person would classify as ‘the good guys’” (an observation that takes on a chilling new cast in light of recent events); “any backdoor or retention mandate both implicitly assumes and, if it is to be effective, must effectively encourage centralized over decentralized computing and communications architectures”; and even if encryption really is law enforcement’s digital-age bête noire, the investigative cost it imposes is a small price to pay in what remains a “Golden Age of Surveillance.”
So what does this all have to do with the Vault 7 leak? It’s a fair question. Software exploits of the type disclosed by WikiLeaks and encryption backdoors may both, technically speaking, be lines of computer code, but the stakes surrounding each are distinct. For the reasons Julian put forward (and more), encryption backdoors should be a complete non-starter: mandating them would present a grave security threat to critical internet infrastructure. As a quartet of leading security researchers put it in a highly regarded 2014 paper, mandating built-in encryption backdoors amounts to “intentionally and systematically creating a set of predictable new vulnerabilities that despite best efforts will be exploitable by everyone.”
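To make that structural problem concrete, here is a minimal sketch of a hypothetical key-escrow design in Python, using the widely available cryptography package. Everything in it (ESCROW_KEY, backdoored_encrypt, escrow_decrypt) is invented for illustration and stands in for no actual proposal:

```python
# A minimal sketch of why a mandated "key escrow" backdoor is a
# systemic vulnerability rather than a targeted one. Hypothetical
# design for illustration only.
from cryptography.fernet import Fernet

# The backdoor: one escrow key, baked into every device ever shipped.
ESCROW_KEY = Fernet.generate_key()

def backdoored_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    """Encrypt a message, but also wrap its key for the escrow holder."""
    message_key = Fernet.generate_key()  # fresh key per message
    ciphertext = Fernet(message_key).encrypt(plaintext)
    # The mandated extra step: the message key itself, encrypted
    # under the single global escrow key.
    wrapped_key = Fernet(ESCROW_KEY).encrypt(message_key)
    return ciphertext, wrapped_key

def escrow_decrypt(ciphertext: bytes, wrapped_key: bytes) -> bytes:
    """Whoever holds ESCROW_KEY -- warrant or no warrant -- reads everything."""
    message_key = Fernet(ESCROW_KEY).decrypt(wrapped_key)
    return Fernet(message_key).decrypt(ciphertext)
```

The design works exactly as advertised for “the good guys,” and that is the problem: ESCROW_KEY is a single, predictable point of failure shared across every user of the system. Steal it, leak it, or compel its disclosure once, and every message ever protected by the scheme falls, which is precisely the “exploitable by everyone” dynamic the researchers warn about.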
When law enforcement or intelligence agencies exploit existing security vulnerabilities, things are perhaps less clear-cut. Unlike with backdoors, not every exploit of a software vulnerability poses a systemic risk. (While a backdoor to the iPhone would put a hole in every pocket, the targeted deployment of an exploit would not.) Still, many vulnerability exploits have widespread consequences that put internet security at risk. As the security quartet put it, the “danger of proliferation means each use of an exploit, even if it has previously run successfully, increases the risk that the exploit will escape the targeted device.” Call it the Jurassic Park Rule of Internet Security:
Jim, the kind of control you’re attempting simply is . . . it’s not possible. If there is one thing the history of internet security has taught us it’s that vulnerabilities will not be contained. Vulnerabilities break free, they expand to new territories and crash through barriers, painfully, maybe even dangerously, but, uh . . . well, there it is. . . . I’m simply saying that vulnerabilities . . . find a way.
For example, despite reportedly rigorous testing before deployment, the Stuxnet worm used by the United States and Israel to attack an Iranian nuclear facility unexpectedly spread to non-target computers. And when the government sits on a zero-day vulnerability so that it can exploit it later, there is always the chance that an adversary has discovered the same flaw and is doing the same thing. These risks are, for the most part, inherently unknowable beforehand.
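Part of what makes proliferation so hard to contain is how little an exploit actually is: at bottom, it is just knowledge of a latent bug. A toy sketch (the vulnerable service, STORAGE_ROOT, and the payload below are all invented for illustration) shows why that knowledge travels so well:

```python
# A toy illustration of exploit proliferation: a service that builds
# file paths from user input, with a classic path-traversal bug.
import os

STORAGE_ROOT = "/var/app/uploads"

def read_upload(filename: str) -> bytes:
    """Serve a stored upload back to a user."""
    # The latent bug: nothing checks that the resulting path
    # actually stays underneath STORAGE_ROOT.
    with open(os.path.join(STORAGE_ROOT, filename), "rb") as f:
        return f.read()

# The "exploit" is nothing more than awareness of that gap. A filename
# like this walks up out of the upload directory to arbitrary files:
payload = "../../../etc/passwd"
# read_upload(payload) -> contents of /etc/passwd on an unpatched host
```

Deployed quietly against one machine, that payload is a targeted tool. Written into a leaked document, it works against every unpatched copy of the vulnerable code on the internet. The bug does not change when the exploit escapes; the only thing that changes is who knows about it.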
While it’s true that there are unknown risks associated with both exploits and encryption backdoors, only the latter amount to deliberately introduced vulnerabilities. Comey has nevertheless been quite skeptical of the notion that giving the government a golden key to the encrypted devices of millions of users would present a broad threat to the security of the internet. His theory, after all, is that the government—with judges as gatekeepers—will use such a key responsibly and with oversight. But Vault 7 is a visceral reminder that the public can’t trust the government to keep this stuff safe—hell, not even the government can trust the government to do so. And on that score, backdoors present an even more cut-and-dried case than exploits.
Even if an exploit or a backdoor is yours and yours alone for now, your monopoly is either a chimera, or it will be short-lived. And the consequences of spillover can be—as Jeff Goldblum learned the hard way—equally unpredictable and devastating. While WikiLeaks did not publish any malicious code this week, it did claim that the contents of Vault 7 have been circulating “among former U.S. government hackers and contractors in an unauthorized manner.”
What happens when a highly weaponized suite of hacking tools makes its way into the broader internet? I hope we are not about to find out—but if we are, I suspect that Comey and his colleagues at the FBI are unlikely to be happy with what they find. Here’s hoping the experience gives them pause the next time they ponder whether their solution to the threat of “absolute privacy” is really such a good one after all.