Bugs, Bounties, and Blowback

Last week news broke of a major software bug—now termed “Shellshock”—in Bash, open-source software used in Linux and UNIX operating systems. Security experts have warned that the vulnerability is particularly troubling because it allows attackers to take control of machines running the software and because there may be half a billion such devices. Soon after the disclosure of the bug and of patches designed to protect against it, the blowback started: security firms began to report seeing attempts to exploit the vulnerability (see here and here).
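To see why the bug is so dangerous, it helps to look at how simple it is to trigger. The snippet below is a minimal local probe adapted from the widely circulated test for the original Shellshock bug (CVE-2014-6271): a vulnerable version of Bash executes the command trailing a crafted function definition when it imports that definition from an environment variable. This is only an illustrative self-check against one's own shell, not an exploit.

```shell
# Probe for CVE-2014-6271, the original Shellshock bug.
# Vulnerable bash versions execute the code after the function
# body ("echo vulnerable") while importing the environment variable.
if env x='() { :;}; echo vulnerable' bash -c "echo probe" 2>/dev/null | grep -q vulnerable; then
  echo "bash is vulnerable to Shellshock"
else
  echo "bash appears patched"
fi
```

On any system patched since late 2014, the crafted definition is rejected and only the inner `echo probe` runs; on an unpatched system, an attacker who can set an environment variable passed to Bash (as many CGI web servers did) gets arbitrary command execution.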

Despite the risk of harm to the public from Shellshock and other such vulnerabilities, governments play only a limited role in responding to these sorts of threats. This post offers an overview of the informal, non-governmental system that has developed to discover, disclose, and patch vulnerabilities, along with some thoughts on the US government’s role (including as a purchaser of vulnerabilities) in the software bug ecosystem.

The limited role of government and law in combating software bugs is striking. National Computer Emergency Readiness Teams (CERTs) assist in disseminating information about vulnerabilities, as in, for example, US-CERT’s initial alert on Shellshock. But apart from the CERTs, efforts to address vulnerabilities are mostly non-governmental. Take Shellshock as a case study: it affects open-source software that an individual maintained “as an unpaid hobby,” it was discovered by an IT manager “in his personal time,” and companies whose software is affected have scrambled to develop and release patches (e.g., Apple). On the one hand, the development of any system of responsible disclosure and patching is somewhat remarkable; on the other hand, the system is imperfect and, as exploitation in the wake of Shellshock’s disclosure makes clear, it does not fully protect consumers. It also has to contend with a well-developed black market where sellers can obtain high prices for zero-day vulnerabilities, as detailed in a recent RAND report (especially pages 25-28).

The informal, privately organized system governing vulnerability disclosure and patching has two main components:

  1. Responsible Disclosure: Software developers and the white-hat community use a system termed “responsible disclosure,” whereby those who discover vulnerabilities notify the software developer and allow time for a patch to be developed before publicizing the vulnerability. This system reduces the harm that could result if a vulnerability were disclosed publicly and exploited in the time it took to develop a patch.
  2. Bug Bounties: To encourage research and responsible disclosure of flaws in their software and to compete with the monetary incentives available on the black market, some companies offer bug bounties—rewards to those who discover and disclose vulnerabilities in the companies’ software. Facebook’s program is outlined here, and Google’s program for Chrome is described here. Google has paid out “more than $1.25 million” in bug rewards, and the website bugcrowd.com lists more than 75 companies that offer monetary bug rewards and many others that offer swag.

Although these systems have likely decreased risk to consumers, they are imperfect. As the response to Shellshock has shown, even when bugs are disclosed to software developers in a responsible way, consumers are not instantly protected because some systems are not immediately patched, and patches can be flawed, as reports indicate some initial Shellshock patches were. Moreover, disclosure of the vulnerability creates blowback: the disclosure provides notice of the vulnerability’s existence to potential attackers who can then focus on exploiting the vulnerability in unpatched systems. Finally, zero-day vulnerabilities can fetch 10-100 times higher prices on the black market than they do in bug bounty programs, according to RAND. Although some companies have increased their bounty rewards—Google this week tripled its maximum award to $15,000—the bounty programs are not going to put the black market out of business.

Where is the US government in this system? The government is a bug hunter, not a park ranger. In other words, the government is not policing or regulating bug hunting by private parties; it’s searching for and reportedly purchasing vulnerabilities to use for, inter alia, intelligence collection. In response to the Heartbleed vulnerability disclosed in April, White House Cybersecurity Coordinator Michael Daniel published a post entitled, “Heartbleed: Understanding When We Disclose Cyber Vulnerabilities.” Daniel explained that “in the majority of cases, responsibly disclosing a newly discovered vulnerability is clearly in the national interest” and “disclosing vulnerabilities usually makes sense.” But he also made clear that an interagency process is used to determine whether to disclose particular vulnerabilities or whether to “withhold[] knowledge of some vulnerabilities for a limited time” for intelligence collection or other purposes.

For all the software improvements that it may spark, the informal bug bounty and responsible disclosure system can’t restrain governments from developing, obtaining, or exploiting bugs. Even if companies could offer sufficiently large bounties to corner the market (which is doubtful), they can’t force or incentivize disclosures from actors, like governments, that aren’t motivated by money. And in all likelihood, companies can’t outbid governments bent on acquiring zero-day vulnerabilities.

For now at least, there’s no bug regulator, only bug buyers and bug sellers. And the bug market is booming. 

About the Author

Kristen Eichensehr

Assistant Professor at UCLA School of Law, Affiliate Scholar at Stanford Law School's Center for Internet and Society, and Former Special Assistant to the Legal Adviser of the U.S. Department of State. Follow her on Twitter (@K_Eichensehr).