What the White House Needs to Disclose about its Process for Revealing Cybersecurity Vulnerabilities

At a series of events earlier in October, White House Cybersecurity Coordinator Rob Joyce announced that he is preparing to release more information about the Vulnerabilities Equities Process (VEP). 

As we’ve discussed before, the VEP is a complicated yet important process that determines whether the government will notify a digital-technology company about a cybersecurity flaw in its product or service, or choose not to disclose the flaw and use it for later hacking or intelligence-gathering purposes. We argued that a legislative solution—not just a less formal interagency review—is needed to govern this high-stakes process, which will have repercussions for cybersecurity, privacy, access to information, and our economic competitiveness. We’ve also argued that much more information about this process should be released to the public. 

Joyce’s announcement of this planned voluntary release of information is a welcome development; he has said in the past that he is generally pleased with how the current interagency process works. He indicated in his statements earlier this month that the public should expect at least a “charter” (which we take to be a more formal statement of the principles underlying the process), as well as some basic statistics about how the VEP has been applied to date to disclose (or delay disclosing) vulnerabilities. 

Since the point of the release is to demonstrate the legitimacy and success of the program, we’ve compiled a “punch list” of the types of information we believe the White House should commit to share:  

  • Who participates in the VEP: An interagency memo released in response to a Freedom of Information Act request redacts the standing participants of the VEP. The National Security Agency and the Secret Service appear to be permanent members, with other departments—including Defense, State, Justice, and Homeland Security—participating when they have their own particular “equities” (that is, departmental interests in the outcome of the decision-making). But there is no justification for obscuring who the decision makers are. Pundits in D.C. often throw around the word “cabal” hyperbolically, but this is literally a secret cabal.
  • What vulnerabilities are submitted to the VEP: Here’s one reason we prefer a legislative framework over something less formal—a welcome provision in the Senate’s Intelligence Authorization Act mandates that VEP participants share the policies that govern when they submit vulnerabilities to the process. This can only improve our understanding of the VEP, and understanding the process must include when and how these software flaws are submitted for review in the first place. If different agencies interpret their responsibilities differently, that may be a red flag when the differences are substantive rather than merely procedural. Greater transparency on these points may also clarify whether individual, technically specified vulnerabilities are sent through the VEP, or only operational, end-to-end exploits.
  • The factors considered by the participants: There are many equities at stake, including nongovernmental ones, and how they apply to any specific vulnerability can vary greatly depending on the circumstances. But it is important to confirm, based on the factors to be considered, that the process does, in fact, lean in favor of disclosure.
  • Basic statistics: The NSA has said that it discloses known vulnerabilities to the affected tech companies more than 90 percent of the time. Is the rate similar for other agencies? If the rates differ, why do they differ? And what are we talking about in absolute numbers?
  • Context for the numbers: We’ve learned the hard way that publicly released numbers can significantly obscure the impact of what is being reported. In the surveillance context, for example, a single court order was used to indiscriminately collect millions of phone records on a daily basis. The release should therefore provide some characterization of what is held. A single exploit in a widely used product or in critical infrastructure could affect many more devices and users than 100 exploits in very specific systems used by highly targeted groups.

Most of the disclosures suggested above should be mandated by law going forward. But the administration should also use this opportunity to take, and share, remedial steps addressing some of the apparent problems with past VEP determinations. In particular, we recommend the administration explain what may have gone wrong with the VEP, given that, for example, Microsoft has said it wasn’t warned about certain NSA exploits before the “Shadow Brokers” disclosed them. We know from an April 2014 blog post by then-Obama Cybersecurity Advisor Michael Daniel that the VEP was reinvigorated in 2014 after sitting dormant for several years. Were the hacking tools disclosed by the “Brokers” compiled before the VEP restarted? If they were compiled after the reboot, did they go through the process? How were the decisions to withhold them made? There were many exploits; were they considered individually?

And finally—and we concede this may be a sensitive subject—the administration should address what steps will be taken to ensure that the tools are safe from leaks and hacks. The forthcoming release of VEP information has been pitched as a way to reassure the public, technology companies and Congress about the legitimacy of the process. We believe legitimate questions are being asked about the administration’s ability to protect these tools after they are obtained or developed. Addressing the widely known security incidents—even if admitting to a mistake in the process—will go a long way toward demonstrating that while the process may not be infallible, at least it aims to be responsible. 

Image: Trevor Paglen/Wikimedia Commons
About the Author(s)

Michelle Richardson

Deputy Director of the Center for Democracy and Technology’s Freedom, Security, and Technology Project

Mike Godwin

Distinguished Senior Fellow with the R Street Institute