A great deal of ink has been spilled regarding the many security vulnerabilities in Zoom teleconferencing software that were discovered after hundreds of millions of people began using Zoom to hold meetings, classroom discussions, yoga classes, and even funerals during the COVID-19 lockdown. And while Zoom took immediate measures to shore up security on its platform, including hiring Facebook’s former chief of security as well as a widely recognized leader in establishing bug bounty programs, those actions came years after security consultants found vulnerabilities serious enough to make cloud provider Dropbox reconsider its use of Zoom and New York City schools ban the software for remote learning.

Security vulnerabilities and other software flaws are not unique to Zoom. An entire industry has grown up around the fact that cybersecurity is a widespread problem with potentially serious legal, political, social, and economic costs. In their own defense, technology companies point out that commercial software is highly complex and that consumer use cases aren’t always predictable ahead of time. While this is true, bigger problems are at work here as well.

The “move fast and break things” approach to building software products, in which being early to market is the chief goal and shipping a product now often means putting off bug fixes until later releases, remains the dominant business philosophy among technology companies, even as we have come to realize the dangers of this approach as “software eats the world.” Further complicating matters is a commoditized technology industry that competes on price; with margins kept to a bare minimum, security engineering is often the first thing cut from a project’s budget.

Building secure software, a Sisyphean task if there ever was one, is difficult and time-consuming, and it requires levels of expertise that most software developers do not have. In other words, it’s expensive. While some larger technology companies have spent years and dedicated large amounts of money to building software that is both useful and secure, many others, some small and some not-so-small, remain tied to the idea that building secure technology is too expensive, too time-consuming, and an obstacle to bringing products to market.

These decisions aren’t made in a vacuum. Companies are well aware that their technology products invariably contain some number of unknown flaws, some of them serious, but many make the calculation that those vulnerabilities will somehow be found and patched without imposing any extraordinary costs on themselves. Companies that take this approach would also be quick to point out that even the companies that spend heavily on security still ship products with security holes. While this may be true, the difference between these competing approaches to cybersecurity is felt in the costs we all pay for a world built on insecure software.

Software vulnerabilities fall under the category of negative externalities, a term economists use to describe costs imposed on third parties by activities related to the production or consumption of a good or service. A canonical example of a negative externality is environmental pollution, which imposes costs, in multiple forms, on people who are external to the producer-consumer economic activity. Cybersecurity flaws become negative externalities when the general public, and not the technology companies themselves, is left to bear the costs imposed by vulnerabilities in the products we all use.

Why would companies choose to externalize costs related to the production or consumption of their products? Often, it’s because they can. Until the advent of comprehensive federal regulation of air and water pollution in the late 1960s and early 1970s, companies generally faced no real limits on what they emitted from their factories’ smokestacks and drainpipes, so they didn’t have to spend money on expensive scrubbers or invest in cleaner manufacturing methods. Similarly, there are very few legal or regulatory limits on the cybersecurity of products today. If a company makes public claims about the security of its product that turn out to be false or misleading, the Federal Trade Commission will likely step in, but unless a company’s conduct is especially egregious, it will rarely bear many of the costs associated with those flaws. That is largely a product of the history of the software industry in this country. In the 1980s, as computers began to appear in people’s homes and not just in corporate offices, questions about regulation and liability for software flaws were drowned out by concerns about stifling innovation, imposing costs that would stunt a nascent technology industry, and thereby depriving consumers of useful products that would never be created under a cloud of potential liability or regulatory exposure.

When companies are thus insulated from paying the full costs associated with the use of their products, the problem of moral hazard can arise. Another term from the economics literature, moral hazard describes the incentive to take greater risks when one is shielded from, or insured against, the costs associated with those risks. In the case of Zoom, because there were no external pressures to review its security practices, there would have been little incentive to spend additional money on cybersecurity efforts. The same is true for most technology companies. Because we allow these companies to externalize the costs of securing their products, we also shield them from the consequences of the additional risks created by poor cybersecurity practices. But those costs eventually come due, and an increasingly networked world is left holding the bill.

Defenders of Zoom argue that after-the-fact market pressures from users, like those Zoom currently faces, give companies sufficient incentive to fix their products by patching existing releases or pushing out new ones. Software updates are indeed a necessary part of any working cybersecurity system, but these market pressures alone are generally not sufficient: they still leave many of the costs of externalized security in the laps of others.

Minimizing the cybersecurity moral hazard problem means putting ex ante incentives in place that push technology companies to take responsibility for the costs associated with vulnerable software. Possible approaches include increased regulation, tort liability, and statutory requirements for reasonable security measures based on industry best practices. All of these approaches raise legitimate questions, and working out the details will require a good-faith national (and perhaps international) discussion of the merits of the various methods available. At least in some areas, these discussions have begun to take place, despite some slow-rolling by industry players. Granted, cybersecurity incentives will not eliminate software vulnerabilities; the problem of software bugs will be with us for the foreseeable future. But pushing technology companies to invest proactively in security practices will go a long way toward a more comprehensive, equitable approach to cybersecurity.

Image: Photo by Robert Nickelsberg/Getty Images