The Biden administration’s National Cybersecurity Strategy, released earlier this year, calls for shifting liability for insecure software, via legislation and agency action, onto software producers that fail to take “reasonable precautions.” It would impose the cost of security flaws onto the party best-positioned to avoid them while rejecting industry’s attempt to shift liability downstream. While not without critics, this proposal received a surprisingly muted reaction from software industry trade groups, potentially suggesting acquiescence to some form of software security liability.

Security bugs, meaning vulnerabilities in software that an attacker can exploit, have historically had a special legal status: they only rarely form the basis of a liability lawsuit. But that peculiar status is about to be cross-examined in public debate and, unfortunately, there are some potential bugs in the standard discourse on the topic. First, a lack of scientific evidence on the effectiveness of different software security measures means this debate can too easily be hijacked by parties unconcerned with the public interest. Second, there is a danger of focusing excessively on eliminating only known vulnerabilities in shipped software.

Liability by Any Name

There are many reasons why software has largely escaped liability to date. For one, almost all software licenses, those pages of text that users mindlessly scroll through, disclaim the producer’s liability. In addition, there is no widely recognized legal standard that demarcates secure from insecure software development practices; the liability suits that have been brought, including those against D-Link and Cisco, rested on shaky, ad hoc reasoning. Moreover, harms from software aren’t always obvious or measurable. Taken together, these conditions mean that, in legal terms, software manufacturers have no clear-cut duty to fix bugs; there is no established standard of care to abide by; and users can’t always perceive, and courts aren’t always ready to recognize, that any harm occurred. The software industry has thrived under this status quo.

Many, these authors included, have long called for imposing some liability on irresponsible software developers. Software consumers ought to have recourse when, for instance, a software producer ships malicious software updates (think SolarWinds) or includes components with known vulnerabilities (think log4shell). These insecure practices can lead to lost data (including highly sensitive personal or national security information), system downtime, and ultimately harm to businesses and consumers. In some cases, insecure software can even deprive communities of necessary resources (think Colonial Pipeline) or cause real physical harm (think deaths caused by cyber attacks on hospitals). While courts have disagreed, the harm of insecure software is concrete and far from anomalous.

But the question remains: what form should software security liability take? While there is a spectrum of options, the two most often debated are general negligence and strict liability, both of which can apply to product liability. A negligence standard holds a company accountable only if it did something unreasonable that caused the harm. A strict liability standard holds a company accountable for causing the harm regardless of how reasonable its behavior was.

Many seem to agree that a negligence standard, rather than strict liability, should govern a software security regime. The standard argument holds that because perfectly secure software is impossible, a strict liability standard, which holds the software producer liable no matter its diligence, is misguided. Instead, a software producer ought to implement a set of sensible security practices that, if followed, shield it from liability should a security bug nonetheless produce harm. This differs from a strict liability regime, in which a company would be as responsible for bugs it could not have prevented as for those it could have. In other words, proponents of a negligence standard argue that because code will never be bug free, a company should not be punished for bugs that got past otherwise reasonable security measures.

Of course, errors are inevitable. The decision to impose strict liability does not presume the possibility of error-free products. Even in car manufacturing, an industry subject to a strict liability regime, errors occur. Two judgments underlie the decision to impose strict liability on car manufacturers: (1) it is in society’s best interest to shift the burden of defective car parts onto the least cost avoider, and (2) the car manufacturer is the least cost avoider. As with cars, users often have little insight into or control over their software, so the company, not the user, is best-positioned to eliminate bugs.

However, that does not answer the question of whether it is in society’s best interest to shift the burden of buggy code entirely onto companies. Cars and software are meaningfully different industries. Cars are, for the most part, standardized in the components they use and functionality they provide. Software, on the other hand, is dynamic and heterogeneous. There are relatively few car manufacturers, all of which are deep-pocketed; there are countless software developers, many of whom are resource-strapped. 

While strict liability may have merit for specific subsections of the software industry (say, medical devices), imposing it on the entire world of developers would be heavy-handed, to say the least. A sophisticated, established medical device company, for instance, is (1) abundantly resourced and (2) building a product that all agree must adhere to the highest possible standard of quality. A novice but talented video game developer, by contrast, is often far less resourced and building a product for which the stakes of malfunction are lower. Threatening that developer with instant, business-sinking liability for any bug would deter new entrants from the marketplace and deprive the public of a potent form of enjoyment that benefits uniquely from creative freedom and the arrival of new artists.

The promise of negligence is the ability to take context into account: negligence is the “case-by-case basis” legal regime. That said, the question remains: what does “reasonable security measures” mean? What is the appropriate standard of care? Answering this question has proven challenging because negligence’s malleable, context-specific nature is both a feature and a bug. 

It is a feature because it ensures lawmakers can tailor legal liability to fit a specific situation, giving society more control over whom to hold accountable and for what. For example, contrary to some suggestions, the negligence standard neither demands nor is limited to the mitigation of known risks. In some cases, negligence allows known risks to go unmitigated, as long as users are warned of them. In other cases, it demands that companies affirmatively search for and then mitigate previously unknown risks. The analysis depends on the company, the user, the product, the use case, and other context-specific facts.

This highly flexible, context-specific approach is also a bug because negligence demands evidence-based substance to give shape to the reasonableness standard. Without objective empirical evidence of what constitutes a reasonable security measure in a given context, the value of a case-by-case approach is all but lost. Worse, its absence leaves the public exposed to the whims of companies, which will introduce their own evidence in pursuit of a negligence standard that best serves their needs.

The Political Difficulties of Defining a Duty of Care

A common refrain among technical experts discussing software security and liability is that any standards should be “technically grounded.” Naturally, citizens should expect their representatives to make judgments on liability, and especially on a duty of care, that are derived from research, analysis, and technical reality. But the fact of the matter is that there is a dearth of evidence connecting specific software security measures to improved security outcomes. In its absence, a negligence standard, and the public, are vulnerable both to misguided definitions of reasonable security, such as today’s myopic focus on shipping software without known vulnerabilities, and to industry attempts to hijack the regulatory process, through political log-rolling and backroom politicking, in ways that advance commercial interests instead of the public interest.

There is no single software security practice, or set of practices, “known” to result in less harm to downstream consumers. The types of evidence now common in the medical world, such as randomized trials, systematic reviews, and other standard scientific practices, have come to the world of software security slowly, despite the best efforts of many researchers. For instance, so-called “DeWitt” clauses, license terms that forbid publishing benchmarks of a vendor’s product, stifle researchers’ ability to measure the effectiveness of software security tools, preventing the accumulation of knowledge that should ideally underpin a software security liability regime.

Rather than helping build a liability regime based on solid scientific evidence, software security companies and major software providers may corrupt the process. Security vendors could push for standards that effectively require producers to buy their products. Major technology vendors, for their part, could embrace onerous security standards as a way to build a moat, entrenching their market advantage while hurting competition and consumers.

While these dynamics will likely play out whether or not scientific evidence exists, those who want an outcome grounded in such evidence stand a better chance if that evidence exists in the first place. Right now, industry shapes security standards by supplying trial experts, funded by company-defendants, in data breach and other cyber-related cases to offer purportedly unbiased opinions on security standards. These experts often cite research, also purportedly unbiased, done by forensics firms hired by those same company-defendants in the aftermath of a cyber incident. That these opinions can be biased in favor of the company-defendant is self-evident, and it is the very reason the adversarial judicial process exists. For the adversarial process to work, however, there must be sufficiently resourced, truly independent security research done on the public’s behalf. Major foundations and government funding agencies ought to create dedicated funding to cultivate this line of research. Early examples of such research already exist.

Beyond Known Security Vulnerabilities

A standard is only as effective as the substance given to it. Without ample evidence of truly effective security practices, the negligence regime has already fallen prey to purported “best practices” with little grounding in technical reality. One such practice sets the bar for security at software free of known vulnerabilities. A favorite among technocrats, this “standard of care” is both unlikely to actually improve security and likely to encourage willful blindness: shipping unvetted code to avoid finding, and therefore being responsible for, vulnerabilities.

Past commentary on security practices and software liability often treats reducing, or even eliminating, known vulnerabilities in software products as a starting point. Known vulnerabilities are documented weaknesses in code that an attacker could potentially exploit. As of April 2023, there were over 200,000 publicly known software vulnerabilities, including vulnerabilities in finished software products as well as in open source software components, the building blocks regularly integrated into consumer-facing applications. Software producers often ship applications with components that have known vulnerabilities; one analysis suggests that over 80 percent of commercial codebases contain a known vulnerability, and nearly 50 percent contain high-risk vulnerabilities.
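
To make the notion of a “known vulnerability” concrete, the sketch below, which is illustrative rather than drawn from any analysis cited here, shows one way a developer might query the public OSV.dev vulnerability database for advisories recorded against a single open source component. The package name and version are placeholders, not a finding about that package.

```python
# Illustrative sketch: ask the public OSV.dev database which advisories
# (if any) are recorded for one open source component at one version.
# The package name and version below are placeholders.
import json
import urllib.request


def known_vulnerabilities(name: str, version: str, ecosystem: str = "PyPI") -> list[str]:
    """Return the OSV advisory IDs recorded for the given package version."""
    query = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }).encode("utf-8")
    request = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=query,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        result = json.load(response)
    # OSV returns an empty object when no advisories are recorded.
    return [advisory["id"] for advisory in result.get("vulns", [])]


if __name__ == "__main__":
    # Hypothetical check of a single pinned dependency.
    print(known_vulnerabilities("requests", "2.25.0"))
```

A check like this only surfaces vulnerabilities that someone has already found and documented, which is precisely the limitation the rest of this section explores.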

Should liability tied to known vulnerabilities become law, what about software producers using little-known, essentially unvetted third-party software components? Because some empirical software research indicates that vulnerabilities tend to be discovered in popular components, software producers could rely on lesser-used components that have no known vulnerabilities yet have received little or no security scrutiny. That complies with the letter of a “no known vulnerabilities” regime but arguably not the spirit of secure software development.

One recent suggestion is that companies should “ideally” apply the same scrutiny to the open source code on which their business depends as they apply to their own proprietary code. To a software developer, that idea sounds sensible but is, in reality, preposterous. For example, a recent 1,000-line software application for storing vulnerability data about popular software projects used an open source project (pandas) that comprises over 1,000,000 lines of code. A security review of that code would have turned a simple project into a major undertaking. Would society be best served by demanding that all software developers anywhere in the supply chain be equally responsible for mitigating known vulnerabilities when it might mean hobbling resource-strapped, innovative projects?
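
As a rough, hypothetical illustration of that disparity, a developer could compare the volume of code in an installed copy of pandas with their own application’s source tree along the following lines; the application directory here is a placeholder, and the count covers only Python files, not the component’s compiled extensions.

```python
# Rough illustration: compare the amount of Python source in an installed
# pandas package with a small application's own codebase. The application
# path is a placeholder; counts here are for illustration only.
from pathlib import Path

import pandas


def count_python_lines(root: Path) -> int:
    """Sum the line counts of all .py files under the given directory."""
    if not root.is_dir():
        return 0
    total = 0
    for source_file in root.rglob("*.py"):
        with open(source_file, encoding="utf-8", errors="ignore") as handle:
            total += sum(1 for _ in handle)
    return total


pandas_root = Path(pandas.__file__).parent  # installed pandas package directory
app_root = Path("./my_small_app")           # hypothetical small application

print("pandas lines of Python:", count_python_lines(pandas_root))
print("application lines of Python:", count_python_lines(app_root))
```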

Under a no-known-vulnerability standard, the developer of a popular component might fail to identify a vulnerability, one that may have been easy for them to find, and ship their product containing it. Countless other projects would adopt that otherwise highly valuable component and grow to rely on it before the vulnerability is finally discovered. Each of those projects would then be responsible for a security defect, often embedded deep in layers of code, and each would absorb the cost of fixing the bug on its own, often at greater total expense than it would have cost the original developer to find and fix it in the first place. All because the standard focused on known vulnerabilities instead of unknown, but easily findable and fixable, ones.

This loophole is one example of how a myopic focus on industry-wide best practices, such as shipping software free of known vulnerabilities, not only fails to take advantage of negligence’s ability to tailor context-specific standards of care but also ignores the practical realities of software development.

Conclusion

The Biden administration’s proposals for software liability have brought this debate greater attention. This is a welcome development. But it does not follow that all current proposals, or even future ones, have ironed out all the potential bugs in a negligence regime for software. Of course, just as all software code contains bugs, so does legal code. Some may argue that legal code, like software code, can never be bug free. But figuring out how many and what kind of bugs we are willing to tolerate in legal code and in software code is a debate worth having.

IMAGE: Cybersecurity is one of the elements of properly functioning companies and workplaces. (Atech Support via Flickr)