Dave Aitel and Matt Tait’s recent post in Lawfare argued that the U.S. government’s procedure for deciding whether to withhold unknown or little-known vulnerabilities in software and hardware for use by the national security and law enforcement communities or to publicly disclose them for the benefit of broader cybersecurity – a procedure known as the Vulnerability Equities Process, or VEP – is inherently harmful to American intelligence operations.

Aitel and Tait’s analysis fails to recognize that the VEP, at its best, exists expressly to weigh more than just intelligence and national security concerns. The process is meant to acknowledge that government use of these tools can affect the online security of everyone who lives in the U.S. and engages with its economy. The VEP needs improvement, especially greater transparency, but its original goal of balancing perspectives beyond those of the intelligence community should remain central to reform efforts.

Aitel and Tait’s overarching argument is that “the VEP is, inherently, harmful to intelligence operators.” They note that Russia, Iran, and North Korea will remain unrestrained by oversight in this area, so the VEP will “always represent a strategic disadvantage against foreign adversaries” for the U.S. And because U.S. cyber capabilities “already face a greater level of scrutiny” than those of its competitors, they argue, the VEP adds unnecessary constraints.

They also critique aspects of the VEP as it currently exists. Aitel and Tait argue that disclosing vulnerabilities to companies does not guarantee improved security of the relevant products, because companies may have other reasons not to patch, such as limited resources or a desire to move customers to newer products. The authors point out the risk of exposing intelligence sources and methods that accompanies vulnerability disclosure, especially if disclosure happens after U.S. intelligence services have exploited the vulnerability. They also suggest that disclosure essentially amounts to government agencies subsidizing the cybersecurity of privately developed products with taxpayer money, a function that is not the stated aim of these agencies.

These are certainly important factors to consider when assessing options for VEP reform. However, government possession and use of vulnerabilities deserve scrutiny in and of themselves, regardless of disclosure outcomes for cybersecurity and government strategy. Reducing or doing away with oversight tools such as the VEP carries its own risks. The U.S. has a tradition, however imperfect, of balancing government needs for intelligence against protections for “U.S. persons” – generally defined as citizens, permanent residents, and organizations incorporated under U.S. law. Indeed, this tradition distinguishes us from our “foreign adversaries.” For example, Executive Order (EO) 12333, although not without fault, establishes guidelines for collecting foreign intelligence that help maintain a “proper balance between the acquisition of essential information and protection of individual interests.” Specifically, EO 12333 sets limits on intelligence community collection, retention, and dissemination of information concerning U.S. persons.

The NSA’s use of zero-days presumably does not target U.S. persons directly. However, the intelligence community’s failure to disclose zero-days means flaws in potentially widely used computer systems go unfixed, which can affect U.S. persons. Aitel and Tait portray this risk as minimal, arguing that the skillset for finding and exploiting zero-days exists only within a highly confined circle, reducing the chances that a bug held by the U.S. would be discovered and used by other actors to the detriment of U.S. persons.

Although independent discovery and exploitation may not happen often, it has happened, and with high-profile consequences. One of the vulnerabilities used in Stuxnet, the worm used to target Iranian nuclear facilities in which the U.S. government played a suspected role, was independently discovered and used to bad effect. The LNK vulnerability, MS10-046, was likely introduced to Stuxnet by its creators around March 2010. In June 2010, security researchers publicly revealed that this bug was being exploited independently before a patch was available from Microsoft. Thus, government secrecy about a bug allowed exploitation by other actors to begin or continue to the detriment of individuals. The logic of EO 12333 and other intelligence oversight precedents suggests government zero-day use deserves heightened scrutiny for its domestic consequences, regardless of the strategic limitations this scrutiny imposes.

Law Enforcement Use Heightens the Need for Scrutiny

Government use of zero-day vulnerabilities, however, also extends to the law enforcement community. The FBI uses zero-days and appears to have participated in the VEP at least once. Law enforcement’s use of zero-days during criminal investigations directly affects U.S. persons. When law enforcement assists the intelligence community in conducting surveillance of, for instance, agents of a foreign power located in the U.S., the Foreign Intelligence Surveillance Act (FISA), while imperfect, contains provisions mitigating collateral effects on U.S. persons. Again, U.S. law has a tradition of factoring the interests of U.S. persons into such investigations rather than blindly endorsing strategic objectives at their expense.

As far as we know, intelligence and law enforcement currently share the same VEP. If so, the VEP should adopt the higher standards that would apply to law enforcement use as default principles, to ensure that zero-day use affecting U.S. persons is adequately scrutinized. Alternatively, I have suggested even higher scrutiny of law enforcement use of vulnerabilities (see chapter 21 of the forthcoming book Cyber Insecurity). Separate VEP processes for law enforcement and intelligence use may allow oversight standards to be better tailored to different contexts of zero-day use. Indeed, oversight of vulnerability use by law enforcement and/or intelligence agencies should perhaps be expanded beyond the executive branch, for instance by including an outside actor in a VEP-like process, just as other intelligence activities are subject to checks by other branches of government.

Analysis of Bulk Disclosure and Strategy-Based Disclosure

Aitel and Tait close their post with two policy recommendations for improving the VEP process. First, they suggest disclosing vulnerabilities to companies in bulk rather than individually, in hopes of giving companies more systemic security knowledge and incentivizing patching. Second, they recommend aligning the VEP principles with U.S. strategic objectives: retaining, for instance, bugs useful for unlocking suspected terrorists’ iPhones and disclosing bugs known to Chinese intelligence services.

Bulk disclosure is intended to incentivize companies to patch vulnerabilities. However, this option has several drawbacks. First, if bulk disclosure turns into bulk review – where vulnerabilities are reviewed as a batch instead of individually – the change could reduce the scrutiny applied to each vulnerability. Second, if Jason Healey’s research correctly estimates that the NSA holds only a few dozen vulnerabilities, there may not be enough vulnerabilities to qualify as “bulk.” Additionally, the few dozen vulnerabilities the NSA does hold are likely to be more powerful, higher-value bugs, which deserve greater scrutiny, not less. Last and most important, bulk disclosure would not address the root issues that keep companies from patching, including a lack of resources or market strategy considerations. Tackling these root issues would be a better approach, perhaps by including actors such as the FTC or NIST in the VEP process; they could work with companies to incentivize patching or otherwise mitigate the negative effects of disclosure.

Basing VEP decisions primarily on strategic considerations also has drawbacks. First, Michael Daniel’s blog post suggests strategic considerations are already part of the VEP process. Second, his list of criteria used in VEP decisions also includes technical considerations, such as the risk the vulnerability poses or its likelihood of outside discovery. These technical questions are, from a strategic perspective, important for the NSA to consider, and it would not make sense to sideline them.

Most importantly, focusing excessively on strategic considerations would essentially neuter the VEP, permitting disclosure only when it aligns with U.S. strategic objectives. Excluding consideration of the effects of zero-day use on U.S. persons would run counter to U.S. law’s stated commitment to balancing the interests of U.S. persons against those of the U.S. government. We have approached these decisions from more than just a strategic perspective, and we should continue to do so.

Correction: 

An earlier version of this post incorrectly named a bug from Stuxnet to illustrate that bugs kept secret by governments can also be independently discovered and used for malicious purposes. The bug I originally referenced, MS08-067, was likely introduced to Stuxnet after it was publicly known, a fact documented in research on Stuxnet’s development. (See Symantec’s “Stuxnet 0.5” report.) However, the LNK vulnerability, MS10-046, was likely introduced to Stuxnet by its creators around March 2010 and can be used to make the same point. This post has been corrected accordingly.