
A Response to “The Tech”: Continuing the Vulnerability Equities Process Debate

In my recent Just Security piece, I argued that Aitel and Tait’s suggestions in Lawfare to focus the Vulnerability Equities Process (VEP) more narrowly on strategic intelligence concerns would neuter other important purposes the VEP serves. Aitel and Tait disagreed with me on Twitter and in a post Aitel independently authored.

As I stated in my piece, the VEP needs improvement. But needing improvement does not mean that the concept of a VEP that takes into account multiple equities is dangerous. Our legal tradition of protecting civil liberties supports oversight over government possession of zero-day vulnerabilities in a way that retains disclosure as a necessary and feasible outcome.

With that in mind, it’s worth exploring the substantive portions of our disagreement, identifying areas of agreement, and expanding on points others raised during online and offline discussion.

Points of Disagreement: Responses to Aitel’s Rebuttal

In his solo post and on Twitter, Aitel claims I did not understand the arguments made in the initial Lawfare piece. He states that he and Tait “simply claim that the VEP is a pure PR move that cannot hope to accomplish its stated goals and does harm while doing so.” But in their first piece, they did not simply claim that the VEP is a harmful PR stunt; they argued that the “VEP is, inherently, harmful to intelligence operators.” My response was aimed directly at their claims that the VEP is “inherently harmful” and that including considerations other than the interests of the intelligence community puts “strategic cyber security goals on a roulette wheel.” I argued that the US government must evaluate more than intelligence considerations in designing how it handles zero-day vulnerabilities. I clearly understood what they were arguing, but I disagree with their intelligence-focused argument because it ignores other equally important factors.

Let me illustrate my disagreement on this fundamental issue by returning to a point I made in my original post. Aitel and Tait argue that “law enforcement use of hacking as an investigative technique is an inevitable consequence of encryption,” including using zero-days. In other areas involving national security, US law and policy subject law enforcement activities to more scrutiny than intelligence operations directed at targets overseas. This tradition should apply to law enforcement acquisition, possession, and deployment of vulnerabilities.

Although law enforcement deployment of vulnerabilities may fall under existing oversight processes, scrutiny of law enforcement acquisition and possession of zero-day vulnerabilities is not currently well established. The government has not publicly released the full list of agencies participating in the VEP. Law enforcement should be subject to the existing VEP or its own VEP, and the chosen process should reflect the long-standing scrutiny of law enforcement activities. This scrutiny is grounded in the protection of civil rights and civil liberties of US persons. The values and interests behind this tradition cannot be located solely in what helps or hurts the intelligence community. Further, we should also ask whether law enforcement use of zero-day vulnerabilities is even appropriate, in the same way citizens are asking whether law enforcement possession and use of military-grade weapons and equipment is wise.

I am not alone in believing the VEP should take into account matters beyond intelligence activities. For example, the President’s Review Group suggested the government should only keep zero-days secret for “rare instances” of “high priority intelligence collection” – a suggestion clearly premised on the need to disclose vulnerabilities in most instances to benefit individuals and the private sector. Building these considerations into the VEP is and should be more than an empty PR gesture.

I also disagreed with Aitel and Tait’s suggestion that bulk disclosure of zero-day vulnerabilities might improve the VEP. Aitel claims they did not propose bulk disclosure. However, in their original piece, Aitel and Tait state that NSA disclosure of “bundles of hundreds of bugs to vendors at a time . . . might have more weight” in reducing the number of vulnerabilities in software and in encouraging vendors to improve code security. They argued such bulk disclosure is “a concept that might be adopted in better future processes.” That sounds like a favorable assessment of bulk disclosure, an assessment Aitel and Tait offer to policy makers for consideration in VEP reform. It was this proposed concept I critically analyzed in my post.

Points of Agreement: The Need for Greater Systemic Cybersecurity

Judging from his responses, Tait and I agree that the VEP is not a silver-bullet solution for all cybersecurity ills. I have no problem pursuing broader cybersecurity solutions, as Tait advocates. However, seeking such solutions does not eliminate the need for adequate oversight of the US government’s acquisition, possession, and deployment of vulnerabilities.

Tait makes several suggestions for achieving broader computer security. He argues that pivoting the VEP to be about “defending against hacks” and “defending against 0-days” instead of about “mere disclosure” would be “a huge win for U.S. security,” in part because most computer security compromises do not come from exploiting zero-days. However, the VEP is meant to decide whether the government should keep or disclose vulnerabilities, so I would be intrigued to learn how Tait envisions it serving as a broader tool for US cyber defense, other than through the disclose-and-patch model. I am especially unclear how the VEP would achieve this aim without losing its primary oversight function. I am open to hearing more about these pivot suggestions, but broader cybersecurity solutions will likely come from processes other than the VEP.

I thank Tait for noting a specific point of agreement: the US needs to figure out ways to tackle the root problems that can interfere with company patching of disclosed vulnerabilities, including lack of resources or market strategy considerations. These areas are ripe for creative suggestions, and I would welcome additional thoughts from Tait on this point. We should be able to find solutions that provide incentives for better software security without instituting a full software security liability regime.

For instance, Chris Soghoian has suggested including policy actors such as the FTC, Department of Commerce, or NIST in the VEP because these entities might be more aware of the consequences of disclosure and nondisclosure for companies. These actors could better understand which bugs companies might be able or willing to patch, and could contribute that information to disclosure deliberations. Going a step further, these agencies could follow up with companies to assess whether patches have been made.

The US has long evaluated intelligence and law enforcement activities from more than just the perspective of those agencies, and it should continue to do so. If the VEP seems like “just PR,” then we should strengthen it. Cybersecurity beyond the VEP is important – but working towards broader cybersecurity does not mean we have to abandon meaningful vulnerability oversight.

***

Correction to original post:

In my original post, I named a bug from Stuxnet to illustrate that bugs kept secret by governments can also be independently discovered and used for malicious purposes. The example I used was wrong. The bug I referenced, MS08-067, was likely introduced into Stuxnet after it was publicly known, as documented in Symantec’s “Stuxnet 0.5” report.

However, another bug in Stuxnet can be used to make the same point, and the original post has been corrected to reflect this fact. The LNK vulnerability, MS10-046, was likely introduced into Stuxnet by its creators around March 2010. In June 2010, security researchers publicly revealed that this bug was being exploited independently before a patch was available from Microsoft. Members of the information security community have also identified this bug as a problem. Thus, government secrecy about a bug allowed exploitation by other actors to begin or continue, to the detriment of individuals. Moreover, the larger point my post made is valid. The possibility of mutual discovery by bad actors and subsequent harm to US persons warrants scrutiny of the government’s vulnerability discovery, acquisition, and deployment process.



About the Author

is a fellow at the Berkman Klein Center for Internet & Society at Harvard University studying the exercise of power in the Internet society. She focuses on Internet legislation in developing countries, grassroots protests against government surveillance, and international politics and law relating to surveillance technologies and practices. Follow her on Twitter, @mailynfidler.