
Looking Back at 2016: A Status Check on Government Hacking

Last year, the ongoing encryption debate took a backseat to a steady drip of stories and developments related to government hacking. Those developments set the stage for policy and legal innovations that are critical but that now seem unlikely to occur. As a result, we may look back on 2016 as the year we legitimized government hacking without establishing safeguards to prevent its abuse.

Before talking about the implications of that fact, it is worth walking through some of the events of the last year and taking stock of what we learned:

1. Apple iPhone Hack. In March, the Federal Bureau of Investigation (FBI) procured from a third party the means to independently hack into the iPhone of the San Bernardino attacker and to access the data on that phone. The FBI then withdrew its request to compel Apple to design a unique version of the iPhone operating system that would have allowed it to unlock the phone. The Bureau subsequently said that it would not be submitting the iPhone vulnerability to the Vulnerabilities Equities Process (VEP), the Executive branch process used to determine whether to disclose or exploit product vulnerabilities; when it procured the ability to unlock the phone, it had not obtained sufficient technical detail about the product vulnerability being exploited.

The Apple case served as a proof point for those who have argued law enforcement should exploit existing product vulnerabilities rather than seek an access mandate. It also called attention to the VEP and raised questions about how the government’s disclosure process should be applied to vulnerabilities procured from third parties. 

2. Playpen Criminal Cases. The ongoing series of prosecutions resulting from the FBI’s remote hacking operation against Playpen, a child pornography website, raised questions about the admissibility of evidence obtained through hacking and about whether vulnerabilities used in hacking operations need to be disclosed in order to allow defendants to confront the evidence against them. Many judges appeared to lack the mooring to know how to tackle these questions, reaching drastically different conclusions based on analysis of the same technology, expert testimony, and points of law. Some elected to throw out evidence. One found that a warrant wasn’t even necessary for this type of remote hacking operation. We have seen some convergence more recently, and I tend to agree with those who found that evidence specifically in these Playpen cases should be admissible without requiring court disclosure of the vulnerability.

What we’ve learned more generally from these ongoing prosecutions is that answers to evidentiary questions will be very fact dependent. These questions do not appear likely to present a huge impediment to criminal prosecutions, but they will need to be adjudicated on an individual basis, and answers will vary depending on the specifics of the hacking technique. This means that in some cases vulnerabilities will need to be disclosed in court in order to protect a defendant’s rights, introducing a degree of unpredictability for law enforcement and possibly putting a product’s users in greater jeopardy.

3. Shadow Brokers Disclosures. In August, a group calling itself the Shadow Brokers publicly released a large cache of exploits allegedly obtained from the National Security Agency. Some of these exploits took advantage of zero-day vulnerabilities — flaws in software unknown to the vendor — in Cisco and Fortinet products, again highlighting the importance of the VEP.

Much of the debate about vulnerability disclosure focuses on independent discovery — that is, the idea that if the U.S. government doesn’t disclose a vulnerability, it is leaving users at risk that the vulnerability will be discovered and exploited by another party. But what the Shadow Brokers story showed us is that some of the risk inherent in government hacking stems from the management and exploitation of non-disclosed vulnerabilities, rather than from independent discovery. The NSA somehow mismanaged its stock of vulnerabilities, resulting in an inadvertent disclosure to an adversary and a material risk to users. The same lesson can be taken from the recent public disclosure of a zero-day vulnerability used in a remote hacking operation against Tor.

4. Consensus in Favor of VEP Reform. The stories above, which all involved undisclosed product vulnerabilities known to the government, lent credence to calls to reform and codify the VEP. Over the course of 2016, we saw a broad consensus emerge in favor of those reforms, which in December found support in the Center for a New American Security’s surveillance reform agenda. The House Encryption Working Group Year-End Report, released last month, similarly called on Congress to explore formalizing the VEP.

Reform proposals are not without their detractors, and the specifics of the VEP’s structure, ownership, and criteria remain open points of discussion. But to many, reforming this process, which is intended to help the Executive branch make informed decisions that balance online and offline security, seems to make basic good-government sense.

5. Rule 41 Changes. After a last-minute push for delay, changes to Federal Rule of Criminal Procedure 41 went into effect on December 1st, allowing law enforcement to obtain a warrant for remote hacking operations in cases where the location of the target is unknown. The changes eliminated a procedural prohibition against government hacking but left many big substantive concerns unaddressed. Questions remain about how warrants against anonymous targets can satisfy the Constitution’s particularity requirement and about how to deal with situations where anonymous targets are found to be located outside the United States.

Ideally, eliminating this procedural barrier would compel Congress and the courts to grapple with these more serious concerns. And despite the acrimony that the rule changes created, you can now find a lot of agreement about the common-sense next steps needed to place the change on a more solid substantive foundation. Check out here, here, and here for a blueprint of some of these steps.

The Year We Legitimized Government Hacking.  

Of course, other than the Apple story, none of this activity really dates to 2016. The actual Playpen operation took place in early 2015. The Shadow Brokers’ code dates back to 2013. Reforms to Rule 41 were proposed by the Department of Justice in 2013, and are intended to address law enforcement operations stretching back more than a decade. The activity here isn’t new or novel. What is new is the amount of public attention it received.

Government hacking has taken place for years, and I suspect it will take place with greater frequency in the future, so increased public scrutiny now is a good thing. Moreover, the activity suggests that law enforcement is capable of adapting to some of the challenges to its operations presented by encryption. Government hacking, if not a perfect solution to those challenges, will at least provide a means to go after the most critical targets. Only a year ago, the notion that government hacking could provide an alternative to an access mandate seemed more hypothetical than real. That is no longer the case today.

While stories about government hacking prompted a healthy policy debate, it is also important to note what they did not do: trigger major uproar. Compared to the industry response to the FBI’s efforts to compel Apple to weaken the security of its product, the response to its hack of that same product was relatively muted. In this respect, 2016 wasn’t just the year we came to better understand government hacking. It was the year that activity was legitimized. It was the year we accepted the premise that the government should, under some circumstances, hack its targets.

Looking Forward, Staring into the Policy and Legal Void

On its face, 2016 seemed to deliver a lot of progress on this issue. We saw the legitimization of activity that to my mind is inherently legitimate if subject to appropriate safeguards, increased public scrutiny of that activity, and emerging consensus about the laws and policies that should govern that activity.

But after legitimizing the activity and finding policy consensus, it is still the case that none of the necessary reforms are in place today. All we have to show for the debate thus far are changes to Rule 41, which actually tack in the opposite direction, eliminating procedural prohibitions against hacking without establishing any additional safeguards. Beyond that, we are depending on the fragile and opaque VEP as it currently exists. This raises the question: Are we worse off today than we were when the year began?

My expectation as of a few months ago was that the consensus that emerged in 2016 would find its way into actual law and government processes this coming year. The election results have thrown those expectations into a blender, along with everything else. We may now be consigned to re-adjudicating previously settled matters of surveillance policy. New policies needed to ensure government hacking remains appropriate and safe now seem unlikely to be put into effect. The balancing of online and offline security required by the VEP is not what one is inclined to expect from the incoming Trump administration. And Congress is likely to have priorities other than creating safeguards to govern remote hacking carried out under the revised rules of criminal procedure. This means that the serious risk created by this activity — risk to individuals as well as systemic risk to the country’s cybersecurity posture — will go unmitigated.

I have no positive note to end on here other than to say this may be a temporary pause. In short order, the new administration is going to have to own the country’s enduring cybersecurity crisis. Some of the policy questions I’ve discussed here are at the heart of that crisis; and the gravitational force of that crisis may push these issues back onto the agenda at some point. Despite any instincts to the contrary, the new administration may eventually be forced to side in favor of strong security for the software and hardware Americans use and to put mechanisms in place that prevent government hacking from adding to the systemic risk we already face. Or at least one can hope.



About the Author

Head of Trust at Mozilla, Non-Residential Fellow at the Center for Internet and Society at Stanford Law School, Former Intelligence Specialist at the Congressional Research Service, Former Counterterrorism Analyst