An example of the splash screen from the Petya malware that was suspected of relying on an exploit developed by the U.S. National Security Agency.
It’s well known that the government is in the computer hacking business and, we would argue, that under the right circumstances it has legitimate reasons to stay in it. But government-sponsored hacking also poses real security risks. Without well-established, well-understood mechanisms for accountability and transparency, government hacking can undermine privacy, interfere with free expression, jeopardize critical infrastructure and hobble American tech companies that need to offer strong privacy and security solutions to their users to compete in the global marketplace.
Mechanisms that provide insight into why, when and how the government hacks aren’t yet written into law. But legislation under consideration by Congress — the Protecting Our Ability to Counter Hacking Act of 2017, also known as the PATCH Act — would be an important first step toward improving and clarifying the framework under which we allow our government to hack. (The act’s name derives from the fact that fixing security problems in software and networks typically requires software “patches” that modify computer code.)
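To make that concrete, here is a deliberately simplified sketch in C — not drawn from the bill or from any real product — of what such a patch can look like. The vulnerable function copies input without a bounds check, the kind of flaw a “zero day” might target; the patched version closes the hole with a few changed lines.

```c
/* Illustrative only: a hypothetical "before and after" of a security
   patch. Not taken from any real vendor's code. */
#include <stdio.h>
#include <string.h>

#define BUF_LEN 16

/* Vulnerable version: strcpy() writes past `buf` whenever `input` is
   longer than BUF_LEN - 1 bytes, a classic stack buffer overflow that
   an attacker could exploit to run code of their choosing. */
void greet_vulnerable(const char *input) {
    char buf[BUF_LEN];
    strcpy(buf, input);            /* no bounds check */
    printf("Hello, %s\n", buf);
}

/* Patched version: the "patch" is a small code change that bounds the
   copy and guarantees the string is terminated. */
void greet_patched(const char *input) {
    char buf[BUF_LEN];
    strncpy(buf, input, BUF_LEN - 1);
    buf[BUF_LEN - 1] = '\0';       /* ensure NUL termination */
    printf("Hello, %s\n", buf);
}

int main(void) {
    greet_patched("world");        /* safe for input of any length */
    return 0;
}
```

Until a vendor learns the flaw exists, it cannot ship a fix like the one above; that is why the decision to disclose or withhold a vulnerability matters so much.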
Right now, the United States government uses an informal interagency process to decide whether to withhold previously unknown software vulnerabilities that it discovers (often known as “zero days”) so that it may use them for its own hacking purposes, or to disclose them to companies so they can be fixed. But given that the security of millions, if not billions, of global Internet users is often at stake, it’s shocking that the public knows so little about how this decision-making process works. Big policy questions — like whether the government should develop and collect computer vulnerabilities for primarily offensive or defensive purposes — deserve a bigger public debate, with Congress, not just agency bureaucrats, setting the ground rules.
But what ground rules need to be in place, precisely, to ensure the government strikes the right balance between public safety, corporate needs, and national security requirements? Right now, the White House convenes what’s known as the “vulnerabilities equities process” (VEP) so that representatives from different agencies can discuss and then vote on whether, when and how to disclose particular vulnerabilities. The VEP was born in the last year of the George W. Bush administration in an attempt to weigh sometimes conflicting interests, and became standard operating procedure during the first years of the Obama administration.
The VEP operated completely off the public’s radar until the National Security Agency was accused of hoarding a vulnerability that was exploited by hackers. In responding to that accusation, President Barack Obama’s cybersecurity adviser, Michael Daniel, wrote a blog post listing the considerations he personally weighed as a VEP participant. The list was thoughtful and reflected a holistic view of Internet security, including not only national security and criminal investigation interests but also the rights and interests of the Americans and companies who rely on the Internet every day to conduct business, go to school, bank, communicate with loved ones and more.
Eventually, the Obama administration released an interagency memorandum of understanding that discussed the procedures governing the VEP. But it critically omitted the substantive factors that drive the VEP’s decisions about whether to disclose a vulnerability to the tech industry. To this day, that memo and Daniel’s earlier blog post represent the government’s only meaningful descriptions of the process, and the government is not bound to stick to them.
A process that affects not only companies’ ability to deal with security issues but also the privacy and security interests of individuals and corporations should not operate in near-complete secrecy, and it should not exist at all without statutory authorization from Congress. It is long past time for a formal legal framework to address this problem.
A bipartisan coalition of federal lawmakers thinks the PATCH Act is the right answer. The legislation would enact into law a new, refined version of the VEP. The proposed act would codify the VEP’s existence, require the administration to write explicit and public rules to govern it, and list the equities (the interests that need to be weighed against one another) that should be on the table. The act would permit audits and require basic statistics to be released. It is not, however, so prescriptive that it would unfairly tilt the process toward either withholding or disclosing any particular vulnerability.
Some bureaucrats and policymakers reflexively oppose such congressional oversight, usually because they seem to believe there is little to weigh: that national-security hacking opportunities always outweigh even a healthy and vibrant Internet, or individuals’ and companies’ interests in digital security. But such a reflexive, simplistic view of cybersecurity is actually the clearest justification for requiring a transparent and accountable interagency process. The Internet has so many complex and interconnected layers that VEP participants should be required by law to consider the ripple effects of failing to disclose a critically placed vulnerability.
The oversimplified question that has dominated the VEP debate so far is whether, on balance, the government should disclose security vulnerabilities to the software vendors affected by them, typically major companies like Microsoft, Apple or Cisco. It’s commonly believed that doing so would improve cybersecurity generally while hampering the government’s ability to exploit vulnerabilities and eavesdrop on “bad guys” like hostile foreign governments or criminal organizations. That belief probably overstates the case: not every exploit has an immediate national security use, and hacking is not the only, or even the most commonly used, tool available to our agencies. At least, that’s what we believe, based on the opaque framework in place now; the PATCH Act would give the public better information with which to weigh competing claims about where the balance should lie.
(We have some initial thoughts about balance on at least one topic — although some government actors want to invest in developing or discovering generalized ways of getting past digital security protections in software, we think it is generally preferable for the government, when authorized by law, to tailor a hack to a specific target’s computer rather than leave a mass-use product like Word or Chrome unpatched, risking digital security on a broad basis.)
Of course, building a statutory framework for the VEP and what the government does with known vulnerabilities is just one piece of a bigger puzzle that includes how the government comes into possession of the vulnerabilities in the first place. Maybe you think the U.S. government simply buys technology on the open market to pierce digital-security measures, as the FBI did last year to get access to an iPhone in the San Bernardino terrorist investigation. But that view isn’t accurate or complete. In fact, several U.S. agencies develop such technologies in-house or by contract, and others receive them from researchers (including ones operating on the black market) or from U.S. allies who pass them along.
Some propose getting rid of even the modest rules on the books and enthusiastically joining the arms race with the Chinese, Russians and others to discover and develop ways to bypass or breach digital security tools, both on the Internet generally and in personal devices and technologies. We view that kind of Wild West approach as fundamentally irresponsible, and it will remain so at least until Congress develops a legal framework that lets us make reasonable guesses about the impact such an arms race would have.
Obviously, how the government uses its hacking capabilities raises bigger questions than a legally sanctioned VEP would address. Those questions will continue to demand attention, but we think enacting the PATCH Act would be a key first step in creating a policy framework to govern government hacking.
Image: Wikimedia Commons