The MAGA Case for Software Liability
Improving software security would advance Trump administration goals for American national security, government efficiency, and innovation.

One would normally be safe in assuming that President Biden’s call for software liability is now dead. In its 2023 National Cybersecurity Strategy, the Biden administration highlighted software makers’ ability to leverage their market position to disclaim liability for damages caused by their faulty products. “We must begin,” the strategy declared, “to shift liability onto those entities that fail to take reasonable precautions to secure their software.” Under the current administration, the instinctive inclination of post-Reagan Republicans to rely only on market forces to hold businesses responsible for the consequences of their actions would seem to preclude the use of government policy to improve the security of software vital to business and government operations. Indeed, the Trump team has promised wholesale repudiation of regulations adopted in the past four years, so new limits on industry would seem especially unlikely.
There are, however, good reasons why the new administration should not default to repealing the cybersecurity actions of the past four years and passively accepting severe cyber vulnerabilities in critical infrastructure. In fact, as I explained in a series last year on initiatives aimed at infrastructure and data, much of the Biden administration’s cybersecurity agenda was built on projects launched by President Trump in his first term. The Trump administration would do well to remember the underlying principles that spurred it to initiate these actions the first time around.
To take just one example of the continuity in cybersecurity policy from the first Trump administration through the Biden administration: In 2020, the first Trump administration launched a rulemaking to implement the Cybersecurity Maturity Model Certification (CMMC) program, intended to require defense contractors to abide by the contractual assurances they give regarding the cybersecurity of their information systems. Just this past October, the Biden administration adopted a final rule to implement the Trump proposal. A second piece, specifying policies and procedures for actually including the program requirements in Defense Department solicitations and contracts, still awaits finalization. If industry tries to kill CMMC, President Trump should recall that this is his initiative, intended to end the widespread practice of government contractors failing to deliver on the security promises they make.
Likewise, it would be shortsighted to overturn brand new rules limiting foreign adversaries’ access to Americans’ sensitive data and restricting the import of connected vehicles and components made in China or Russia. Both expressly build on and seek to implement Trump’s Executive Order 13873 declaring that the acquisition or use in the U.S. of information or communications technologies or services designed, manufactured, or supplied by persons controlled by foreign adversaries represents “an unusual and extraordinary threat to the national security, foreign policy, and economy of the United States.”
Given the dangerous failure of markets to deliver critical infrastructure cybersecurity necessary for America’s economic prosperity and national security—and given the steady stream of disclosures of Chinese infiltration—there is good cause for the Trump administration to work to fill other gaps in America’s cybersecurity framework. This is true with respect to not only hardware and networking practices but also the software upon which critical infrastructure and government operations depend.
There is no denying that U.S.-produced software contains multiple vulnerabilities, which criminals and foreign adversaries have been quick to exploit, at great cost to businesses, nonprofits such as health-care providers, and government agencies. I have argued, as have others, that software developers should be liable to businesses and other users that suffer losses due to flaws in their software, just like producers of faulty products in other sectors. But to address the nation’s cybersecurity crisis, one need not wait for legislation establishing a product liability rule for software. There are meaningful incremental steps that the administration could take to advance the security of American software, starting with industrial control systems at the core of critical infrastructure and the software the government itself purchases.
There are at least three reasons why MAGA values support some government action to realign incentives around software security.
Improved Software Security Is Necessary to Win Against China
The U.S. government is on track to spend $5 billion to rip China-made switches from the domestic telecommunications infrastructure and replace them with products made in the U.S. or allied nations. The program was advanced by an increasingly broad series of laws that President Trump signed in 2017, 2018, and 2020 in his first term, based on the concern that the Chinese government could force China-based companies to manipulate their equipment to spy on or disrupt U.S. communications.
All that taxpayer money would be wasted (DOGE, anyone?) if Chinese attackers could walk in the front door of the telephone network, exploiting the vulnerabilities of Western-made hardware and software and the poor cybersecurity practices of American carriers.
Yet brushing past the weak security and outdated equipment of U.S. providers seems to be exactly what the Chinese did in what has been called the worst telecom cyberattack in U.S. history. It became public last fall that the China state-sponsored group known as Salt Typhoon listened in on audio calls in real time, obtained call records data on perhaps millions of users, and accessed the system that logs U.S. law enforcement requests for wiretaps. There is no indication that the attackers exploited any backdoors in any China-made equipment that remains in U.S. networks. The fault, it appears, was not in the Huaweis and the ZTEs, but in our own products and practices.
That sophisticated Chinese hackers were able to burrow deeply into American communications networks should come as no surprise. While the Federal Communications Commission has been busy—and rightly so—targeting China-made equipment in the communications backbone and revoking the licenses of China-based companies that had been authorized to operate in the U.S., it has left largely unregulated the security practices of domestic providers and the security of hardware and software produced in the U.S. or allied nations. The FCC’s hands-off approach, leaving cybersecurity to voluntary corporate action, flew in the face of its express statutory mandate to regulate the nation’s communications networks “for the purpose of the national defense.”
On Jan. 15, the commission adopted a declaratory ruling finding that, effective immediately, Section 105 of the Communications Assistance for Law Enforcement Act (CALEA) affirmatively requires telecommunications carriers to secure their networks from unlawful access or interception of communications—but it did not specify what this means in practice. Instead, the commission approved a notice of proposed rulemaking to require communications service providers to submit an annual certification attesting that they have created, updated, and implemented cybersecurity and supply chain risk management plans. Commissioner Brendan Carr, who became FCC chair on Jan. 20, dissented and forcefully criticized both actions. The rulemaking is unlikely to advance, and it seems probable that the CALEA-based declaratory ruling will not be enforced and may be revoked.
That’s not necessarily a bad thing—a much more concrete effort is needed than the one contemplated in the Jan. 15 action. The proposed rulemaking called for telecommunications carriers to adopt cybersecurity plans. Merely having a plan, however, is not enough; such process-based approaches to security can readily devolve into a triumph of compliance over security. An effective framework for telecom security would include measures that focus on product features and performance, such as those described in the cross-sector cybersecurity performance goals developed by the Cybersecurity and Infrastructure Security Agency (CISA).
Moreover, the problem is not limited to telecommunications companies. Regulating the practices of the telcos would still leave them vulnerable to the weaknesses in the hardware and software that they procure and use in their networks but over which they currently have little control. And electric power systems, pipelines, oil production, transportation, the chemical industry, food processing, and many other critical sectors all depend on connected devices that monitor and control physical actions and that could cause real harm if compromised.
Too many of these industrial control devices are dangerously vulnerable. Some use default passwords or aren't even password protected. Others do not protect the integrity of messages during transmission. The tempo of CISA advisories announcing newly discovered flaws in industrial control system devices is shocking. Moreover, the same flaws show up time and again, and they are often the very same software flaws, with the very same root causes, that infect other software, such as cross-site scripting (first on the list of the 25 most dangerous software weaknesses compiled by the MITRE Corporation) and SQL injection (third on the list). (A weakness is a type of software defect that can be exploited to open multiple vulnerabilities across many products. The MITRE list is based on how much damage a weakness can cause and how frequently it is exploited in the wild. Cross-site scripting and SQL injection are briefly explained below.) One industrial control product that was the subject of a recent advisory has four software vulnerabilities attributable to weaknesses on the Top 25 list.
Foreign adversaries are exploiting these vulnerabilities; indeed, it has been publicly acknowledged for some time that adversaries have been probing industrial control systems as a means to achieve strategic advantage. The market has not generated sufficient incentives to deliver security: Critical infrastructure operators, it seems, have little or no power (or no incentive to exercise power) over the makers of the software-infused devices they use.
A first step the new administration could take to block China from our critical infrastructure would be to address software vulnerabilities in products used by regulated critical infrastructure. It could begin by revising the existing directives for gas pipelines and railroads to specify that, after some transition period (which may be quite lengthy), covered entities cannot buy new hardware or software products that fail to implement innovations (described further below) that can eliminate certain classes of vulnerabilities. The directives, which run for just one year, will need to be renewed later this year anyhow, and there is a pending rulemaking that also may provide a good vehicle for this.
Likewise, under Chairman Carr, the FCC could begin an inquiry into the vulnerability of software used in products that telecommunications carriers depend on. And the Federal Energy Regulatory Commission could work with the North American Electric Reliability Corporation to review their standards as they relate to software vulnerabilities in the bulk electric power system. The goal would not be to dictate technology choices; rather it would be to require regulated critical infrastructure entities to stop using equipment with well-known and actively exploited vulnerabilities that can be avoided with available techniques.
Software Liability Will Reduce Government Waste and Fraud and Increase Efficiency
The sorry state of software security is a source of government waste and inefficiency. The federal government is a major purchaser of software. Much of that software is riddled with flaws. When those flaws are exploited, the government bears unnecessary costs that drain resources from important operations. As the Cyber Safety Review Board found in the case of the Microsoft Exchange Server incident, poor security practices impose "significant costs" on users, including the government.
Last year, a process went into effect that requires companies supplying software to the federal government to attest that they developed their products in accordance with certain practices that could reduce the number of flaws. In the closing days of his administration, President Biden issued an executive order requiring software vendors to submit evidence that they have actually complied with the software development practices specified by the government.
But contrary to press reports, the order did not require software companies selling their products to the federal government to prove they included “ironclad security features” that can thwart Chinese hackers. The Biden attestation scheme does not involve features at all. It requires software developers to adhere to certain development processes, in the hope that better processes will produce better products. Even then, the processes called for are not very ironclad. For example, the attestation form that is the basis of the system requires developers to attest that their software development environment is secured by encrypting sensitive data “to the extent practicable” and to make “a good-faith effort” to maintain trusted source code supply chains.
A stronger, more direct approach would focus not on the development practices of software companies but on the features of the software they produce. It would require vendors to attest that they have taken measures to avoid specific common weaknesses for which there are readily available avoidance techniques. Vendors would be free to choose how to address these problems. But if software purchased by the government has one of these common weaknesses, and if there was a technique that could have eliminated that weakness, the vendor should be liable.
A list of dangerous features to be avoided in government purchases is readily at hand in the "bad product practices" called out by CISA. That list addresses just a handful of the most dangerous weaknesses in software. For example, injection vulnerabilities arise when user inputs are incorporated directly into a website's content or passed to a software interface that interprets the input as code. One common injection vulnerability is known as cross-site scripting (XSS). This weakness allows an attacker to insert malicious code and exfiltrate or modify data. Similarly, other injection vulnerabilities can enable an attacker to incorporate malicious code into a query that gets sent to a database managed with the Structured Query Language. This attack, called SQL injection, allows the attacker to access or modify the database. XSS and SQL injection rank first and third on the list of most dangerous software weaknesses. For both, there are readily available techniques that eliminate the entire class of vulnerabilities, described below.
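To make the mechanics concrete, here is a minimal C++ sketch using SQLite's C API. The table, column, and function names are hypothetical; the point is only to contrast the injectable pattern, which splices untrusted input into query text, with the parameterized form that treats input strictly as data.

```cpp
#include <iostream>
#include <string>
#include <sqlite3.h>

// Hypothetical lookup of a user record by name, illustrating why string
// concatenation enables SQL injection and why a parameterized query does not.

// UNSAFE: the untrusted input is spliced directly into the query text.
// An input such as "x' OR '1'='1" changes the meaning of the statement.
std::string build_query_unsafe(const std::string& user_input) {
    return "SELECT id FROM users WHERE name = '" + user_input + "';";
}

// SAFER: the query shape is fixed, and the input is bound as pure data, so it
// can never be interpreted as SQL. This eliminates the class of vulnerability.
bool lookup_user_safe(sqlite3* db, const std::string& user_input) {
    sqlite3_stmt* stmt = nullptr;
    if (sqlite3_prepare_v2(db, "SELECT id FROM users WHERE name = ?1;",
                           -1, &stmt, nullptr) != SQLITE_OK) {
        return false;
    }
    sqlite3_bind_text(stmt, 1, user_input.c_str(), -1, SQLITE_TRANSIENT);
    bool found = (sqlite3_step(stmt) == SQLITE_ROW);
    sqlite3_finalize(stmt);
    return found;
}

int main() {
    // The concatenated query now matches every row in the table.
    std::cout << "unsafe query: " << build_query_unsafe("x' OR '1'='1") << "\n";

    sqlite3* db = nullptr;
    if (sqlite3_open(":memory:", &db) == SQLITE_OK) {
        sqlite3_exec(db, "CREATE TABLE users (id INTEGER, name TEXT);",
                     nullptr, nullptr, nullptr);
        // The same malicious string is treated as a literal name: no match.
        bool found = lookup_user_safe(db, "x' OR '1'='1");
        std::cout << "safe lookup found a user: " << (found ? "yes" : "no") << "\n";
        sqlite3_close(db);
    }
    return 0;
}
```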
So rather than asking software developers to attest to their development processes, the government should get right to the bottom line and focus on results: It should require software vendors to attest that their products do not allow inclusion of user-provided input in SQL queries or in operating system command strings, that they do not use default passwords, that they do not contain known exploited vulnerabilities (including in open-source components), that they provide for multi-factor authentication, and that they have the capability to gather evidence of intrusions as a baseline feature.
There is precedent for this. The expensive and halting, but clearly justified, effort to remove China-made equipment from chokepoints in the U.S. telecommunications infrastructure, supported in the past by President Trump and Republicans, began with a prohibition against Department of Defense purchase of services that used China-made switches. That is, it began as a government procurement specification. An effort to improve software security could likewise begin with a procurement rule barring the wasteful expenditure of government funds on software with well-understood flaws for which there are well-understood solutions.
Software Liability Will Promote Innovation
One of the most powerful arguments against regulation of technology is that it will impede innovation. And the promotion of innovation is clearly a pillar of MAGA philosophy. “Innovation” appears 81 times in the Project 2025 Mandate for Leadership, and the promotion of innovation was a through line in President Trump’s recent executive orders on artificial intelligence and digital assets.
But what if companies with dominant positions are not innovating, and the legal and market incentives do not favor those who are? What if large tech companies are opposed to accepting responsibility for their failings precisely because they want to keep operating in the same insecure ways, at the expense of innovation (or, put more precisely, prioritizing innovation in features and functionality over innovation in security)?
Failure to innovate is surely one reason why so much software has so many vulnerabilities. Year after year, software developers keep introducing into their products the same exploitable weaknesses. Year after year, the same root causes of software vulnerabilities occupy the top ranks of lists such as the MITRE list of the Top 25 most common software weaknesses or the Top 10 list of web application vulnerabilities maintained by the Open Web Application Security Project. Indeed, 21 of the weaknesses on the Top 25 list for 2024 were carryovers from the 2023 list. Over the span of 2019-2023, there were 15 weaknesses that were present in every list.
What’s important from a MAGA perspective is that these same vulnerabilities constantly reappear because some developers have refused to innovate and are still using half-century-old software languages or coding practices that are inherently likely to yield certain classes of flaws. Or they are failing to adopt innovative techniques that could eliminate entire classes of flaws.
Start with coding languages that do not control how memory can be accessed, written, allocated, or deallocated. These “memory-unsafe” languages are inevitably prone to coding errors, which in turn can lead to vulnerabilities that allow an attacker to illicitly access data, corrupt data, or run malicious code, gaining control of the account running the software. Last year, Google estimated that 75 percent of vulnerabilities used in zero-day exploits are memory-safety vulnerabilities. All told, at least 37 types of software vulnerabilities have been associated with memory unsafety.
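To illustrate what "memory-unsafe" means in practice, here is a minimal C++ sketch; the buffer size and function names are hypothetical. Nothing in the language stops the first version from writing past the end of its buffer, while the bounds-checked version turns the same mistake into a visible, contained error.

```cpp
#include <cstring>
#include <iostream>
#include <string>
#include <vector>

// Hypothetical example of the classic out-of-bounds write (CWE-787) that
// memory-unsafe languages permit: nothing stops a copy from running past the
// end of a fixed-size buffer.
void copy_username_unsafe(const char* input) {
    char buf[16];
    std::strcpy(buf, input);  // no length check: a long input overwrites adjacent memory
    std::cout << buf << "\n";
}

// The same operation written defensively with a bounds-checked container and
// an explicit size limit: an oversized input is truncated or rejected instead
// of silently corrupting memory that an attacker could then control.
void copy_username_checked(const std::string& input) {
    std::vector<char> buf(16, '\0');
    for (std::size_t i = 0; i + 1 < buf.size() && i < input.size(); ++i) {
        buf.at(i) = input.at(i);  // .at() throws std::out_of_range on any bad index
    }
    std::cout << buf.data() << "\n";
}

int main() {
    copy_username_checked("a-perfectly-reasonable-but-overlong-username");
    // Calling copy_username_unsafe with the same input is undefined behavior:
    // it may crash, appear to work, or hand control of the process to an attacker.
    return 0;
}
```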
Yet developers continue to write their software in memory-unsafe languages, the most common being C and C++, favored for their flexibility and performance. Instead of adopting more innovative languages, these developers write complicated internal software development practices that, in essence, instruct their coders to try harder to prevent vulnerabilities. But even proponents of these secure software development practices admit that no reasonable amount of effort—no amount of employee training or admonitions to be careful or pre-release testing—will prevent all of these flaws if the underlying coding language is not memory safe. Bad guys will find the flaws that the developers miss.
In recent years, newer, innovative coding languages have been developed that are not susceptible to memory-safety vulnerabilities but also do not wastefully consume time and computing resources in the way that older memory-safe languages did. One memory-safe language is Rust, and it is now mature enough for Google to adopt its use in Android for new code: In 2022, Android 13 introduced 1.5 million lines of Rust with zero memory-safety issues. (Microsoft, too, has rewritten part of its Windows operating system in Rust, although as of April 2023 the effort was admittedly modest and the company has said more recently that it remains committed to its version of the memory-unsafe C language.)
Given the effectiveness of memory-safe languages in eliminating entire classes of software vulnerabilities, the Biden administration came very close to saying that those who develop new product lines in memory-unsafe languages should be liable for the costs of their failure to innovate where readily available alternative memory-safe languages could be used. That last caveat—“where readily available alternatives could be used”—is precisely the standard that juries composed of ordinary citizens apply every day in product liability cases. Why should software be immune from the same scrutiny?
But a huge amount of memory-unsafe code is in active use and likely to remain so for some time, even decades. This suggests that the broad calls from the National Security Agency, the Cybersecurity and Infrastructure Security Agency, and others for large-scale transition to memory-safe languages are unrealistic. Even a software design innovator such as Google has hundreds of millions of lines of memory-unsafe C/C++ code that is in active use and under active, ongoing development. To compound the problem, large elements of any given software package are actually borrowed open-source code, and open-source code is as bad as any other when it comes to memory safety.
There may be promise in ongoing research to develop tools that would convert software written in memory-unsafe languages into Rust or other memory-safe languages. The Defense Advanced Research Projects Agency is supporting work on the problem, and I know of at least one early-stage startup aiming at automated translation. But the task is massively complicated, and researchers at both Google and Apple have concluded that a large-scale rewrite of existing C/C++ code into a different, memory-safe language is impractical.
Grappling with this reality, researchers and engineers are innovating—and need to be further encouraged—to address the problem of memory safety within the world of C/C++. At Google, for example, engineers have found that, by applying a subset of preventative memory-safety mechanisms, they can partially prevent classes of memory-safety issues even in code written in memory-unsafe languages. Researcher Tal Garfinkel and his team at the University of California, San Diego are finding that a technique called sandboxing can achieve substantially improved security properties in memory-unsafe code with low programmer effort and reasonable runtime overhead, and without wholesale rewriting of code. The approach is to constrain what software modules, including third-party libraries, can do—to confine them in a sandbox—so that the inevitable flaws cannot escape and cause wider damage. Others have demonstrated the effectiveness of sandboxing.
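The sketch below illustrates the confinement principle in its crudest form, using ordinary POSIX process isolation rather than the in-process technique the UCSD researchers developed; the function names are hypothetical. Even if the legacy parsing code corrupts its own memory, it cannot touch the caller's address space, and only a narrow result crosses the boundary.

```cpp
#include <iostream>
#include <string>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

// Hypothetical risky routine: imagine this wraps a legacy C/C++ parsing
// library that may contain memory-safety bugs.
int parse_untrusted_input(const std::string& input) {
    // ... call into the legacy library here ...
    return input.empty() ? 1 : 0;  // nonzero signals a parse failure
}

// Coarse-grained sandboxing via process isolation: the risky module runs in a
// separate address space, so a memory-safety flaw triggered there cannot
// corrupt the caller's memory. In-process sandboxing frameworks apply the same
// principle with far lower overhead and a finer-grained interface.
bool parse_in_sandbox(const std::string& input) {
    pid_t child = fork();
    if (child < 0) return false;              // fork failed
    if (child == 0) {
        _exit(parse_untrusted_input(input));  // child: run risky code, report via exit status
    }
    int status = 0;
    waitpid(child, &status, 0);               // parent: only a small integer crosses the boundary
    return WIFEXITED(status) && WEXITSTATUS(status) == 0;
}

int main() {
    std::cout << (parse_in_sandbox("well-formed input") ? "ok" : "rejected or crashed") << "\n";
    return 0;
}
```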
There are other measures that provide subclasses of memory safety and can be applied to existing code at scale. Google has a technique for ensuring what is called bounds or spatial safety, which it has rolled out at scale for its server-side workloads. Researchers at Apple have developed a technique that addresses the same property.
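As a rough illustration of the retrofit idea (not Google's or Apple's actual mechanism, which lives inside the standard library and compiler toolchain), the hypothetical wrapper below adds a bounds check to every index operation, so existing indexing code gains spatial safety without changing its logic.

```cpp
#include <cstddef>
#include <iostream>
#include <stdexcept>
#include <vector>

// Illustrative wrapper: a thin view type whose indexing operator checks every
// access, so code that already indexes into buffers gains spatial safety by
// changing the type it indexes into rather than rewriting its logic.
template <typename T>
class checked_view {
public:
    checked_view(T* data, std::size_t size) : data_(data), size_(size) {}

    T& operator[](std::size_t i) const {
        if (i >= size_) {
            // An out-of-bounds index becomes a predictable error, not a
            // silent read or write of adjacent memory.
            throw std::out_of_range("checked_view: index out of range");
        }
        return data_[i];
    }

    std::size_t size() const { return size_; }

private:
    T* data_;
    std::size_t size_;
};

int main() {
    std::vector<int> readings = {3, 1, 4, 1, 5};
    checked_view<int> view(readings.data(), readings.size());

    for (std::size_t i = 0; i < view.size(); ++i) {
        std::cout << view[i] << " ";   // existing indexing code is unchanged
    }
    std::cout << "\n";

    try {
        std::cout << view[10] << "\n"; // caught at runtime instead of corrupting memory
    } catch (const std::out_of_range& e) {
        std::cout << "blocked: " << e.what() << "\n";
    }
    return 0;
}
```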
Companies that find it impossible to transition away from memory-unsafe languages should be incentivized to innovate with these partial steps.
Addressing the issues with memory safety is not the only area where innovation can eliminate entire classes of vulnerabilities. While traditional secure software development practices give developers complicated rules for how to avoid flaws such as XSS and SQL injection—rules that must be consistently and correctly applied throughout an application's codebase—approaches that largely eliminate these classes of vulnerabilities are readily available. For example, XSS risk can be addressed by defining a set of vocabulary types to represent strings that are safe for web platforms. Likewise, changing the design of software interfaces to require a domain-specific vocabulary rather than allowing plain strings of input will ensure that untrusted and potentially malicious strings simply cannot be incorporated into an SQL query. As Christoph Kern of Google has explained, these innovative approaches change the design of software interfaces to comprehensively separate untrusted data from trustworthy code fragments, and they centrally ensure that untrusted data is used only when safe. Google has found that these approaches have nearly eliminated XSS vulnerabilities and SQL injection in software built on these secure-by-design foundations. CISA also has recommended a set of practices that would systematically prevent the introduction of SQL injection vulnerabilities (such as by consistently enforcing the use of parameterized queries) and operating system command injections.
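Here is a minimal sketch of the vocabulary-type idea, with hypothetical class and function names: rendering functions accept only the SafeHtml type, and the only way to produce one from untrusted input is through escaping, so forgetting to escape becomes a compile-time error rather than an XSS vulnerability. Production libraries offer richer builders and audited escape hatches; this only shows the core design.

```cpp
#include <iostream>
#include <string>

// A "safe vocabulary type" for web output: code that writes to the page
// accepts only SafeHtml, never a plain string, so untrusted input can reach
// the page only after it has been escaped.
class SafeHtml {
public:
    // The only path from an untrusted string to SafeHtml is through escaping.
    static SafeHtml escape(const std::string& untrusted) {
        std::string out;
        for (char c : untrusted) {
            switch (c) {
                case '<':  out += "&lt;";   break;
                case '>':  out += "&gt;";   break;
                case '&':  out += "&amp;";  break;
                case '"':  out += "&quot;"; break;
                case '\'': out += "&#39;";  break;
                default:   out += c;        break;
            }
        }
        return SafeHtml(out);
    }

    const std::string& str() const { return value_; }

private:
    explicit SafeHtml(std::string value) : value_(std::move(value)) {}
    std::string value_;
};

// The rendering interface takes the vocabulary type, not a raw string, so a
// call site that forgets to escape simply does not compile.
void render_comment(const SafeHtml& html) {
    std::cout << "<div class=\"comment\">" << html.str() << "</div>\n";
}

int main() {
    std::string user_input = "<script>alert('xss')</script>";
    render_comment(SafeHtml::escape(user_input));  // markup is neutralized
    // render_comment(user_input);                 // would not compile: plain strings are rejected
    return 0;
}
```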
Market incentives and government white papers have not been sufficient to generate widespread adoption of these innovations in software security. But research suggests that regulation can promote innovation, especially when the regulation provides certainty. Realigning incentives does not require the government to pick winners and losers from among the available techniques. As I concluded last year: Don’t focus on the process; focus on the outcomes. Legal and policy approaches to software security need not dictate the particular mechanism that a developer should use on a particular product. Instead, the inquiry should focus on outcomes: Would a given flaw in memory safety, cross-site scripting, or injection have been avoided if the developer had applied any of a range of reasonably available alternatives that would have effectively eliminated the entire class of vulnerability? The answer may be very context dependent, but companies should not be allowed to do nothing. Certainly not when their products are used by critical infrastructure or the government.
***
Insecure software leaves America weak. As House Homeland Security Committee Chair Mark E. Green stated last month, China is burrowed into our infrastructure and it has been for years. U.S. agencies and foreign allies said the same thing a year ago, in an advisory entitled “PRC State-Sponsored Actors Compromise and Maintain Persistent Access to U.S. Critical Infrastructure.” Just in the past two years, the Chinese government has broken into the systems of the departments of State, Commerce, and Treasury—and probably more that we don’t know of yet.
MAGA analysis recognizes the dangerous dominance of Big Tech. When market forces yield a form of dominance resistant to innovation, the arguments against regulation weaken. The presence of foreign adversaries in America’s infrastructure and the inefficiencies imposed on the government by insecure software should overcome any residual hesitancy to intervene.