This post is the latest installment of our “Monday Reflections” feature, in which a different Just Security editor examines the big stories from the previous week or looks ahead to key developments on the horizon.
We’re on the cusp of Just Security’s birthday bash—which, because we’re huge nerds, takes the form of a spirited debate over encryption backdoors and the putative “going dark” problem—but our friends at the Washington Post couldn’t wait. They gift-wrapped a crypto story for us last week, tied with a pretty bow of a headline: “Obama faces growing momentum to support widespread encryption.” As Ellen Nakashima and Andrea Peterson explain:
[O]ver the summer, momentum has grown among officials in the commerce, diplomatic, trade and technology agencies for a statement from the president “strongly disavowing” a legislative mandate and supporting widespread encryption, according to senior officials and documents obtained by The Washington Post.
Those documents included a draft memo by National Security Council staff indicating that, in the face of overwhelming opposition from security experts no less than civil libertarians, the administration has effectively struck compulsory backdoors from its list of options for dealing with the increasingly pervasive use of strong encryption. That’s welcome news for those of us who thought we’d settled this dispute correctly back in the Crypto Wars of the 1990s, but I suspect for many in the intelligence and law enforcement communities, it seems like a case of mystifyingly dogmatic intransigence from the technology sector. Even more than usual, this feels like a debate in which the opposed sides stare at each other in bafflement across a chasm of incomprehension. In hopes of preparing myself to move the ball forward a bit later today, I’d like to briefly sketch why I think that might be.
Answering Different Questions: One strong impression I get from my conversations with intelligence folks is that—even if they’re too polite to say it in so many words—they ultimately think technologists’ claims about the security risks of backdoors are a disingenuous cover for an ideological privacy absolutism, the policy equivalent of the office IT guy who says “that’s not possible” when he means “It would be a huge hassle, and I don’t feel like it.” One reason they might think this, beyond the statistically astonishing preponderance of goatees and dreadlocks among professional cryptographers, is that their own tech experts are assuring them that it is too possible to create cryptographic “golden keys” that ensure backdoors only open for the good guys.
As I wrote over at the Cato blog back in February, they’re hearing this because there’s a very narrow sense in which it’s true. On a chalkboard, you can indeed demonstrate that, at least in certain contexts, any number of escrow or “split key” schemes are as mathematically robust against attack as a conventional cryptographic algorithm with one user-generated key, provided the “backdoor keys” themselves are secure. They therefore conclude that the long-haired geeks are spinning to suit their ideological preferences. In reality, the security folks are answering a different question, because any competent security expert will approach the question, not in terms of any component in isolation, but in the context of the cryptosystem as a whole—or rather, an ecosystem of cryptosystems. A backdoored system might provide equivalent security if we stipulate that nothing else goes wrong—that no other component in the security architecture fails, and the backdoor is perfectly implemented. But a well designed security system doesn’t make that sort of assumption: It seeks to make each level of security robust even in the event of failure at other levels. Most cryptographers will tell you that getting key management right is the hardest part of secure communications—and it’s precisely this problem that any kind of escrow system radically exacerbates. Moreover, in the real world, buggy software is ubiquitous, and any mandate that radically increases the complexity of cryptosystems makes those bugs vastly more likely. And, because the code in question now includes functions intentionally designed to facilitate surreptitious recovery of user data, those bugs are much more likely to have dire security implications.
Remember, the question isn’t whether Google and the NSA could come up with some tolerably secure backdoor solution, but whether thousands of platforms and app developers, pushing out updates at a dizzying pace, can come up with solutions that work securely for an alphabet soup of federal, state, and local law enforcement agencies. Security proofs that hold in a frictionless whiteboard world simply aren’t applicable.
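For the technically inclined, here’s a toy Python sketch of the structural problem (emphatically not real cryptography: the stream cipher and key-wrapping scheme are deliberately simplified stand-ins, and every name in it is invented for illustration). The point is simply that once a message’s session key is also wrapped under an escrow key, the escrow key becomes a single secret whose theft, from any one of the many systems holding a copy, retroactively unlocks every message it covers:

```python
# Toy illustration (NOT real cryptography) of why an escrowed
# "golden key" concentrates risk: the cipher here is a simple
# hash-derived XOR keystream, used only to make the structure visible.
import hashlib
import os


def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudorandom bytes from a key (toy construction)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]


def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))


# One escrow key, copied to every agency that needs "lawful access."
ESCROW_KEY = os.urandom(32)


def send(plaintext: bytes):
    # Each message gets a fresh random session key...
    session_key = os.urandom(32)
    ciphertext = xor(plaintext, keystream(session_key, len(plaintext)))
    # ...but the backdoor wraps that session key under the escrow key
    # so it can be recovered later.
    wrapped_key = xor(session_key, keystream(ESCROW_KEY, 32))
    return ciphertext, wrapped_key


# An attacker who steals ESCROW_KEY from ANY holder unwraps every
# session key ever escrowed, for every message, past and future:
ct, wrapped = send(b"meet at noon")
stolen_session_key = xor(wrapped, keystream(ESCROW_KEY, 32))
print(xor(ct, keystream(stolen_session_key, len(ct))))  # prints b'meet at noon'
```

Without the wrapped key, compromising one message requires compromising that message’s own session key; with it, the whole system fails together, which is exactly the “failure at other levels” a well designed cryptosystem is supposed to contain.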
You’re Not Thinking Fourth Dimensionally, Marty!: Another reason for suspicion comes from simple observation of current practice. Whether we look to public-facing services or internal corporate networks, “backdoor” access to user information is notoriously common—and typically not for any purpose so weighty as thwarting ISIS, but because companies like having lots of data to monetize. You want to see the cryptographers’ dystopian scenario? Look out the window! It’s not so bad, is it?
Many, of course, would reply that it is indeed that bad—and bound to get worse as the frequency and sophistication of cyberattacks mount. If you take the status quo as a benchmark, it may be hard to see how a backdoor mandate makes things appreciably worse. But locking in the status quo is a bad idea when you’re in the midst of an arms race. Vulnerabilities that are tolerable at one stage of play are unacceptable later in the game: Failing to get better is getting worse in that context. What we need, in short, is much wider deployment of much stronger encryption, fast—and tethering developers to a mandate that seems little worse than current practice makes it vastly more difficult to adapt rapidly to changing needs. Technologists have stressed, for example, how backdoor functionality makes it vastly more complex and difficult to implement perfect forward secrecy. It might be tempting to counter that most communications systems don’t currently provide PFS, or didn’t until relatively recently—but it would also be shortsighted: Security features that were once optional are increasingly a necessity. More generally, we can’t foresee the range of adaptations that will seem necessary to robust security in coming years, but it’s a safe bet that none of them will be any easier to deploy if they have to be backward compatible with a lawful access architecture developed in 2015. (Some of the most serious cryptographic vulnerabilities we see now, of course, arise precisely from the need to keep new code compatible with older, less secure software still in wide use.)
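To see concretely why forward secrecy and backdoors pull in opposite directions, here’s a toy ephemeral Diffie-Hellman sketch (with deliberately insecure parameters, chosen only so it runs; real systems use vetted 2048-bit-plus groups or elliptic curves). The whole point of PFS is that the secrets protecting a session are generated fresh and then destroyed, so there is nothing left to steal later. An escrow mandate requires someone to keep exactly what PFS exists to throw away:

```python
# Toy ephemeral Diffie-Hellman (NOT secure parameters) illustrating
# the property that key escrow defeats: forward secrecy.
import hashlib
import secrets

P = 2**127 - 1  # a toy prime modulus; far too small for real use
G = 3           # toy generator


def ephemeral_keypair():
    """Fresh one-session key pair; the private half is meant to be
    destroyed as soon as the session key is derived."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)


# Each session: both sides generate throwaway key pairs...
a_priv, a_pub = ephemeral_keypair()
b_priv, b_pub = ephemeral_keypair()

# ...and derive the same shared secret from the other side's public value.
shared_a = pow(b_pub, a_priv, P)
shared_b = pow(a_pub, b_priv, P)
assert shared_a == shared_b

session_key = hashlib.sha256(str(shared_a).encode()).digest()

# With PFS, the ephemeral secrets are now deleted. A later compromise
# of either party's long-term identity key reveals nothing about this
# session's traffic, because nothing that could decrypt it still exists.
del a_priv, b_priv

# An escrow mandate forces retention of the session key (or the
# ephemeral secrets) indefinitely; the retained copy IS the
# vulnerability that forward secrecy was designed to eliminate.
escrowed_copy = session_key
```

The engineering headache the technologists describe follows directly: a lawful-access architecture has to capture and durably store per-session material at scale, across thousands of independently developed apps, precisely the material modern protocol design works to ensure never persists.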
Tyranny of the Inbox: I first heard one of my favorite wonky phrases, “tyranny of the inbox,” in conversation with a (now-former) senior intelligence official on this very topic a few years back. I made a thumbnail sketch of the argument against backdoor mandates, and the official conceded that the idea didn’t seem all that well thought out. It was, he suggested, a case of “tyranny of the inbox”: Well-intentioned and impossibly harried lawyers, analysts, and investigators looking for the most direct way to deal with an immediate problem. Can’t read your target’s messages? Pass a law to make them readable! Longer-term concerns about software ecosystems or geopolitics understandably seem airily abstract when you’re trying to figure out whether a bomb might be going off in an urban center next week. And that’s fair enough—their job is to worry about the latter, not the former. But it does create a tacit near-term bias that I think gives rise to the other two misunderstandings. It’s perfectly appropriate for law enforcement officials to have that perspective, but it’s also generally the wrong perspective from which to approach policy that, once implemented, is likely to remain locked in place years into the future.