Understanding Cyber Market Failures

When the United States government spots a market failure and decides to prevent a corporate merger or to break up an existing company, it generally proceeds only after doing legal and economic homework. For a similar problem, regulating cybersecurity to fix perceived market failures, substantial homework remains to be done.
The Biden administration’s National Cybersecurity Strategy (NCS), in a substantial departure from 25 years of presidential policy, asserted that the market has failed and that “regulation can level the playing field, enabling healthy competition without sacrificing cybersecurity or operational resilience.”
But as Harry Coker, the then-national cyber director, complained at a cyber-regulation conference, “there is not a lot of literature or a good understanding of the gap between cyber risk that is in a business’s self-interest to mitigate, and the risk that is in society’s interest to mitigate.” The literature on cyber market failures is thinly developed, mostly consisting of anecdotes and examples: interesting and illustrative but insufficient to support the weight of policymakers’ expectations.
Market failure is the classic economic justification for government regulation. “Even when markets are efficient,” explains Joseph Stiglitz, “they may fail to deliver socially desirable outcomes …. Governments impose regulations to prevent [socially unjust and unacceptable] exploitation and to pursue a number of other social goals.”
This article advances policymaking by more fully developing four key cyber market failures: information asymmetries, negative externalities, market power, and public goods. It also offers policy recommendations. Table 1 describes the four types of market failure and offers examples and mitigation strategies for each.

Information Asymmetry and Inadequacies
Information asymmetries and inadequacies arise when information is unevenly distributed between parties. When a buyer and a seller, or a government regulator and a regulated entity, do not have the same information about a given situation, one or both may make suboptimal choices, leading to a market failure. While most previous research treats information asymmetry as a single issue, our analysis disaggregates the failures affecting consumers, investors, and governments (both regulators and security and resilience agencies).
Information Asymmetry Affecting Consumers
George Akerlof’s “The Market for Lemons” is a commonly cited example of information asymmetries impacting consumers. In the used car market, sellers know the true condition of the car while buyers know only what they can see and what they are told, creating an information asymmetry.
Applied to cybersecurity, information asymmetries can be a failure of either information technology (IT) or cybersecurity markets. There is no easy way for a customer—even, for example, a security-sophisticated chief information security officer—to understand if an IT product is secure. This is true down to individual consumers looking to buy a smart car, connected fridge, or home router. There is no way to determine if the software or device is built with security in mind, and the consumer, unable to differentiate based on security, selects on price. Producers do not bother to include robust security features, since consumers will not pay for them.
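To see how this asymmetry unravels the market, consider a stylized, textbook-style sketch (the symbols and numbers are illustrative assumptions, not figures from this article). Suppose building in security costs a seller an extra c = 20 per unit and is worth an extra v = 50 to a buyer, so providing it would be socially efficient. If buyers cannot observe security, every product sells at the same price p, and sellers' profits satisfy

\[
v = 50 > c = 20, \qquad \pi_{\text{secure}} = p - c \;<\; \pi_{\text{insecure}} = p .
\]

No seller invests, buyers anticipate this and pay only insecure-product prices, and the efficient trade (secure software at any price between c and v) never happens.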
Governments often fix information asymmetries that disfavor consumers through increased transparency: passing data-breach notification laws; punishing false security claims; and mandating software bills of materials and consumer labeling, as is the case in the United Kingdom, Singapore, and the United States. If consumers can distinguish less secure products from more secure ones, they can make informed choices, hopefully leading to less severe market failures.
Information Asymmetry Affecting Investors
Consumers are not the only parties affected by information asymmetry. Modern securities regulations were founded on reducing the mismatch of information between companies and their investors, who deserve to know the true state of the company’s finances and opportunities. This is the goal of substantial regulation by the Securities and Exchange Commission (SEC) and others and is as true for a company’s cyber posture and risks as for more traditional concerns.
In 2023, the SEC “adopted final rules that will require public companies to disclose both material cybersecurity incidents they experience and, on an annual basis, material information regarding their cybersecurity risk management, strategy, and governance.” The goal is for investors to have “timely, consistent, and comparable information about an important set of risks that can cause significant losses to public companies and their investors.”
Information Asymmetry Affecting Government: Regulators and National Security
Governments face two additional information asymmetries in cybersecurity. They lack information not only about how to regulate properly but also about the threat or impact of incidents they have not been told about. Regulated entities know far more about their own business processes and systems than any regulator does. To try to overcome this mismatch, financial regulators embed teams of inspectors at the largest banks to monitor risks, controls, and compliance. Even then, however, failures still happen.
Similarly, law enforcement, national security, and resilience agencies want to understand threats to the nation from foreign powers and the potential impact of cyber incidents. For example, after its operations were completely disrupted by a ransomware attack, leading to emergency declarations in 17 states, Colonial Pipeline “did not provide the administration with technical information about the breach until … five days after it was reported.”
As with other types of asymmetries, transparency is the main fix, in this case through mandatory reporting of cyber incidents. In the United States, the large-scale push for transparency is the Cyber Incident Reporting for Critical Infrastructure Act (CIRCIA), which requires reporting of covered incidents within 72 hours. Other U.S. federal and state regulators, as well as the European Union, India, and other governments, have similar rules.
Negative Externalities
Negative externalities also impact the cybersecurity domain. An externality is the side effect of producing or consuming a good or service. A classic economic example of a negative externality is pollution. When a factory creates pollution, the downstream effects of the product’s creation, such as damage to natural resources, are not factored into the product’s final cost to consumers.
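In standard textbook terms (a sketch of the underlying economics rather than anything presented in this article), the external cost drives a wedge between private and social costs. Writing MPC for the producer's marginal private cost, MEC for the marginal external cost borne by others, MSC for marginal social cost, and P(q) for the marginal benefit to buyers at quantity q:

\[
MSC(q) = MPC(q) + MEC(q), \qquad MPC(q_{m}) = P(q_{m}), \qquad MSC(q^{*}) = P(q^{*}).
\]

Because the producer ignores MEC, the market quantity q_m exceeds the socially optimal quantity q* whenever MEC > 0. Pricing the externality back in, for instance through a tax or liability roughly equal to MEC, realigns the two, which is the logic behind the liability measures discussed below.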
Our research indicates two further subcategories of externalities in the cybersecurity and IT markets: those with centralized responsibility and those with distributed responsibility.
Centralized Responsibility
Some negative externalities originate from a single organization or firm. When Colonial Pipeline’s operations were disrupted by ransomware in 2021, the impacts cascaded into a national emergency as millions of U.S. energy customers experienced disruptions. Colonial Pipeline had not implemented sufficient security measures, so when it was attacked, unintended costs were forced onto many others, including people who were not direct customers.
Distributed Responsibility
Other negative externalities are caused not by a single, central action but by the actions of a larger group—the cumulative impact of many small safety failures.
In October 2016, the Mirai malware infected Internet of Things (IoT) devices such as webcams and turned hundreds of thousands of them into remotely controlled “zombies.” Teenage hackers used this Mirai botnet to disrupt large sections of the internet in North America. If the owners of those devices had changed their default passwords, or if IoT producers had baked security into their devices, the devices might not have been compromised; the costs of that inaction fell on everyone who suffered from the internet disruption, not just on those who purchased the IoT products.
Solutions
Governments can mitigate negative externalities through incentives and liabilities. Power utilities often have outdated equipment that can be disrupted, causing outages. Accordingly, the U.S. Federal Energy Regulatory Commission introduced an effort to incentivize firms to invest in more secure technology, amortizing the cost of those investments over five years.
Governments can also enact laws or policies that establish liability for companies that sell negligently insecure software or that fail to patch it regularly and sufficiently. When individuals use software, they typically waive the software company’s liability through lengthy terms of service. Without liability, software developers face no consequences for externalities caused by their products.
Market Power
Market power exists when one or more companies, through barriers to entry or concentration, can influence the prices of goods and services. Such monopolies and oligopolies can lead to higher prices and severe market inefficiencies. Sometimes such concentration is considered economically or socially beneficial, as with national champion airlines or power or telecommunications companies.
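One standard yardstick regulators use to judge whether a market is concentrated enough to raise these concerns is the Herfindahl-Hirschman Index, the sum of squared market shares (a common antitrust tool, offered here for context rather than drawn from this article):

\[
HHI = \sum_{i=1}^{n} s_i^{2},
\]

where s_i is firm i’s market share in percentage points. A pure monopoly scores 10,000, a highly fragmented market approaches zero, and U.S. merger guidelines have generally treated markets scoring above roughly 1,800 to 2,500 as highly concentrated.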
Some tech companies have massive market power. For example, in March 2023, the U.S. Federal Trade Commission expressed concern that “swathes of the economy now seem reliant on a small number of cloud computing providers.” This has both beneficial and harmful effects. On the one hand, companies like Microsoft and Google can implement security at a planetary scale. When they improve security, they protect billions of devices and people at once.
On the other hand, such concentration can also result in technology monocultures, where nearly every computer has close to the same characteristics. Attackers can succeed at scale because they “do not have to guess much about the target computers because nearly all computers have the same vulnerabilities.” Further, insufficient competition may lead to uncompetitive pricing and little incentive to improve security.
To solve market power failures, an obvious choice is antitrust regulation, which “prohibits anti-competitive conduct and mergers.” However, there are many other solutions, such as EU efforts mandating that cloud service providers assist customers looking to switch providers. Governments can also embrace and encourage alternatives, especially open-source software, to reduce market concentrations.
Free Riding and the National Security Gap
Public goods are non-rivalrous and non-excludable; one individual’s consumption does not diminish another’s ability to consume the same good, and individuals cannot be prevented from consuming the good. Economists have long considered national security to be a public good and “uniquely tailored to state monopolization.”
While cybersecurity is largely provided by private companies, some aspects function like those of a public good. Cybersecurity is broadly non-rivalrous in that security for one user generally does not diminish the security of another user; a software patch benefits everyone who uses it without harming others. Cybersecurity is non-excludable in that measures taken by one user, firm, or industry often protect others regardless of their actions, which can lead to free riding.
The public core of the internet, which is fundamental to ensuring the internet remains available for everyone to use, is perhaps the most prominent example of a cyber public good.
Free Riding
Public goods are often subject to the problems posed by free riders. Individual users have little incentive to help provide the good if a larger organization, such as a government or corporation, takes on that role.
In the case of cybersecurity, both IT and cybersecurity market failures can play out at a micro and a macro scale. For example, an employee at a large firm has little incentive to update the password on her company email account, assuming that her firm’s systems are safe and that her cybersecurity at work will be provided regardless of her actions.
On a macro level, firms make assumptions about the state of cybersecurity in the system overall, trusting that their government or technology companies will take adequate measures to defend cyberspace. This can lead firms to inadequately arm themselves and their systems and, instead, to free ride on the protections of their national governments.
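The free-rider logic can be made precise with a minimal public-goods game (a stylized sketch with assumed parameters, not drawn from this article). Suppose n firms each choose whether to invest one unit in shared defenses, x_i ∈ {0, 1}, at cost c, and every firm, investor or not, receives benefit b from each unit invested anywhere:

\[
\text{payoff}_i = b \sum_{j=1}^{n} x_j - c\, x_i, \qquad b < c < nb .
\]

Each firm’s private return to investing is b - c < 0, so the equilibrium is universal free riding, even though universal investment would leave every firm better off by nb - c > 0. This underprovision is what the cost-recovery and “ask more of the most capable” remedies discussed below aim to correct.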
National Security Gap
A core proposition of government is that it will protect citizens from internal and external threats, especially those against which they cannot realistically defend themselves.
In cyberspace, until very recently, this has been far less true: individual citizens and organizations have largely been expected to defend themselves against threats, including those posed by the intelligence and military forces of nuclear-armed rival states. Sony Pictures Entertainment learned this lesson after a major data breach in 2014 carried out by “members of units of the Reconnaissance General Bureau, a military intelligence agency of” North Korea.
Solutions
Governments sometimes allow monopolies to handle public goods. For example, the armed forces and intelligence agencies provide national security, and utility companies provide electricity to localities. Public electric utilities can charge slightly higher prices to provide more reliable services generally; they can do the same to improve their cybersecurity to prevent or respond to cyber incidents.
To help deal with free riding, the NCS calls on regulators to ensure that regulated entities in under-resourced sectors can appropriately charge customers for cybersecurity.
Closing the national security gap demands a different set of solutions, such as imposing costs on adversaries or improving defenses. This can be achieved through information sharing, implementing system-wide defenses, or aiding companies facing nation-state threats. The strategy also declares that “we must ask more of the most capable and best positioned actors to make our digital ecosystem secure and resilient.” If all of cyberspace is more defensible, the national security gap narrows, allowing enterprises to defend against even more capable threat actors.
Recommendations
To solve these market failures, governments must first ensure the failures are better understood. Regulators, whether federal or state, should be clear about the specific market failures new rules are meant to address and provide evidence that there is in fact a failure in the first place.
The Trump administration may lead a new push to deregulate cybersecurity, loosening or completely undoing the many rules instituted over recent years. Any such deregulation should be accompanied by an understanding of the market structure to ensure failures are not created or amplified.
Many market failures are intertwined. Almost all are complex, requiring coordinated actions by various government agencies and elements of society. Regardless of whether this administration will regulate or deregulate, perhaps the most important task ahead is a uniform, targeted strategy for cyber regulation, ensuring coordination across all stakeholders.
A key input of that strategy must be a more refined understanding of the market in each critical-infrastructure sector. Accordingly, the Office of the National Cyber Director, working with the National Economic Council and National Security Council, should task the offices of the chief economist of the sector risk management agencies to better analyze the market dynamics of each sector. It is possible that in some sectors, regulation has gone too far, or in the wrong direction, creating new market distortions. In other sectors, self-regulation—perhaps even a new self-regulatory organization run by a sector itself to police its members—might be preferable to new government rules. And there are some sectors, such as water and wastewater, where updated rules may be needed, perhaps tied to new federal funding.
Cybersecurity failings go back decades, with market failures described as early as 1972. Whether administrations decide to regulate or deregulate, policymakers need to do far more homework to comprehensively understand the implications and chances of success of either path.