
Products Liability for Artificial Intelligence

Catherine Sharkey
Wednesday, September 25, 2024, 8:01 AM
How products liability law can adapt to address emerging risks in artificial intelligence.
An illustration of a miniature man pushing a grocery cart into a laptop screen (Photo: Mohamed Hasan/Pixabay, https://tinyurl.com/3wx3p9t6, Free Use)


Editor’s note: This essay is part of a series on liability in the AI ecosystem, from Lawfare and the Georgetown Institute for Technology Law and Policy.

As both government and private parties seek ways to prevent or mitigate harms arising from artificial intelligence (AI), few approaches hold as much promise as products liability. By concentrating on defects in AI products themselves—rather than on the often-opaque practices of AI developers—products liability can encourage safer design from the outset. It can do so by holding manufacturers liable for avoidable harm, thereby compelling them to prioritize the development of demonstrably safer products.

In the United States, products liability law is designed to ensure that manufacturers, distributors, and retailers of consumer products take measures to make their products safe, or at least not unreasonably dangerous. The rationale for imposing liability only after a product is released into the marketplace and causes harm to an end user (typically in the form of physical injury or property damage) is that doing so will incentivize producers to adequately test their products and add safety precautions before launching them on the market, so as to avoid liability for damages once a consumer is harmed. This deterrence-based rationale for products liability, in theory, induces producers to take "optimal" safety precautions, namely those that are cost justified in terms of damages averted.
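
In law-and-economics terms, this notion of "cost justified" precautions is often captured by the classic Learned Hand formula from negligence law (offered here as a stylized illustration of the deterrence logic, not as a doctrinal test the essay itself invokes):

\[
B < P \cdot L
\]

Here \(B\) is the burden (cost) of a given safety precaution, \(P\) is the probability of harm if the precaution is not taken, and \(L\) is the magnitude of the resulting loss. A producer facing expected liability takes "optimal" precautions when it adopts every measure whose cost \(B\) is less than the expected damages \(P \cdot L\) it averts, and forgoes measures that cost more than the harm they would prevent.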

For high-risk products, such as pharmaceutical drugs and medical devices, products liability operates alongside governmental regulators (such as the Food and Drug Administration, or FDA). These regulators conduct premarket safety reviews, looking at aggregate data pertaining to the risks and benefits of such products. Using these data, regulators craft safety requirements that must be met before the product can be released into the marketplace. Products liability imposes liability “ex post” (after the harm occurs), creating indirect incentives for producers to take care that are enforced in a decentralized manner by harmed private parties bringing lawsuits against responsible parties. In contrast, governmental regulation occurs “ex ante” (before the harm occurs) through public safety standards that are promulgated and enforced by a centralized public authority.

A products liability approach to regulating AI would entail the key elements of both forms of regulation: close premarket monitoring of a dangerous "product," combined with after-the-fact tort litigation. Drawing on my previous work, I argue for such an approach to regulating AI, analogous to how pharmaceuticals and medical devices are regulated: combining stringent ex ante FDA safety and effectiveness review with ex post products liability litigation.

A products liability model, of course, presumes a "product." The traditional products/services distinction and tangibility considerations for classifying products fall short when it comes to AI, especially in the face of new societal risks posed by the digital information economy. A better approach—one that will yield more consistent results even in the context of information generated by AI large language models (LLMs)—is one grounded in the fundamental principles that underlie products liability. Courts should allow a particular plaintiff’s claim to proceed when the claim arises from matters that expose society to risks of grave, widespread harm. Under this approach, the massive scale at which AI can cause harm justifies its classification as a product.

Products liability must confront whether an entity down the line, such as a subsequent parts manufacturer or the end user, is also responsible for the harm. AI harms often involve user behavior that contributes to the eventual injury. Yet this has not stopped some courts from adopting a products liability approach to deciding tort claims against social media platforms for harms allegedly caused by user-driven recommendation algorithms, or from holding that social media platforms are responsible for failing to include safety features in their design that would have prevented user harms. Such cases are trial runs of issues that will inevitably confront courts presiding over AI litigation, including the issue of interactive AI technologies that increase the possibility of users harming themselves and others.

Products Liability: A Promising Framework for Addressing AI

In this section, I elaborate on the promise of a products liability framework for AI by describing (a) the historical evolution of products liability in the face of risks posed by new technologies; (b) the information-forcing role played by products liability that is especially well suited to newly emergent, uncertain risks; and (c) the robust ability of the mature products liability doctrine to handle complex, interactive, and dynamic risks.

The Historical Evolution of Products Liability in the Face of Risks Posed by New Technologies

AI seamlessly fits into the arc of products liability’s history, one marked by a flexible response to emergent technologies that pose novel risks. History demonstrates that products liability arose and evolved to protect members of society from transformative technologies that introduced risks different in kind or degree from what the public had faced previously. Initially, in the 19th century, privity limitations, which limited tort relief to parties that had a contractual relationship with the seller of a product that caused them harm, relegated most products cases to the realm of contract law.

This changed in the early 20th century, when the seminal MacPherson v. Buick case dismantled the privity limitation and ushered in the era of imposing negligence liability on a remote manufacturer (in this case, Buick), or liability for failing to take reasonable safety precautions. With the fall of privity, end users could sue parties with whom they had no contract. In other words, whereas before MacPherson, only the dealership could sue the car manufacturer for a defective vehicle, now the driver and even a pedestrian injured by the vehicle could sue as well. What precipitated this dramatic development? The introduction of the automobile. More specifically, this “thing of danger” that exposed members of society to a hitherto unimaginable scale of risks, particularly to “life and limb.” In Judge Benjamin Cardozo’s memorable words: “Precedents drawn from the days of travel by stage coach do not fit the conditions of travel to-day. The principle that the danger must be imminent does not change, but the things subject to the principle do change. They are whatever the needs of life in a developing civilization require them to be.”

By the mid-20th century, another technological transformation was afoot—this time, the proliferation of mass-produced products. A legal regime designed to ensure the safety of handiworks and products sold one-by-one was no longer adequate in a world of mass-produced goods capable of causing harm at an exponentially larger scale. Justice Roger Traynor summed it up: “As handicrafts have been replaced by mass production with its great markets and transportation facilities, the close relationship between the producer and consumer of a product has been altered. Manufacturing processes, frequently valuable secrets, are ordinarily either inaccessible to or beyond the ken of the general public.” Thus was born “strict products liability,” whereby a product manufacturer could be held liable, even when it had taken all reasonable care, for “defective products: products that the ordinary consumer would consider unreasonably dangerous.” In 1965, the Restatement (Second) of Torts (a nonbinding legal treatise that summarizes the state of the law in a field) embraced this strict liability “consumer expectations” test for defective products in Section 402A.

At the time, most products liability cases involved so-called manufacturing (or construction) defects where the end product deviated from its blueprint, such as a Coca-Cola bottle that explodes in the face of a consumer because of a hairline fracture in the glass. Courts articulated strong rationales for strict liability for manufacturing defects: the manufacturer’s superior position to take measures, including testing and tracking data, to prevent product mishaps; the manufacturer’s ability to “spread losses” either by getting insurance or else increasing the price across the board for all end users; and difficulties of proof faced by plaintiffs who, under a negligence regime, had to produce evidence of some failure of reasonable care in the manufacturer’s assembly line and safety protocols.

From the mid-20th century to the turn of the 21st, the number and nature of products cases exploded. Just as the introduction of powerful technology that could harm at scale and mass-produced goods motivated changes to products liability doctrine, so too did the introduction of complex products that pose risks even when manufactured according to their blueprint and used as intended. Whereas manufacturing defect cases dominated at the time of the adoption of Section 402A, in the ensuing years plaintiffs increasingly brought design defect and failure-to-warn claims. Unlike with manufacturing or construction defects, plaintiffs claimed products were defective or “unreasonably dangerous” even when made according to their blueprint specifications. Design defect claims allege that the blueprint itself is unreasonably dangerous. Failure-to-warn claims allege that the manufacturer should have warned consumers about the nonobvious risks of using its product.

The justifications for applying a strict liability “consumer expectations” test to these types of claims were comparatively weak relative to the propriety of doing so with respect to manufacturing defect claims. Thus, courts instead began to impose a negligence-inflected “risk-utility” test to govern design defect claims, whereby the jury was asked to assess the attributes of the product design and determine whether the positive features outweighed the negative ones. By 1998, the Restatement (Third) of Torts: Products Liability relegated the Section 402A strict liability “consumer expectations” test to manufacturing defect claims and set forth a “reasonable alternative design” test for design defect claims, whereby the plaintiff, in order to recover, had to point to an alternative design of the product that preserved the beneficial features while mitigating the risks. Although this especially demanding “reasonable alternative design” test has not been widely adopted, a majority of courts today impose some form of a “risk-utility” test for design defect claims and a “reasonableness” test for failure-to-warn claims.

Now, a quarter of the way into the 21st century, we have entered a new stage for products liability in the digital age, once again characterized by transformative technological developments that have revolutionized how consumers receive and use products. The dramatic shift away from in-person purchase transactions at brick-and-mortar stores toward digital purchases on e-commerce platforms has exposed consumers to new risks from products (including those manufactured by foreign manufacturers) that are now more available than ever before. At the same time, the technological and information revolution that has launched the e-commerce revolution simultaneously affords new possibilities for product oversight and safety.

AI is the latest chapter in the information economy era. Given products liability’s historical track record—namely, its ability to adapt to technological developments, from the automobile to mass-produced consumer goods—there is reason for confidence in its ability to respond to the novel risks posed by AI in the information economy.

The “Information-Forcing” Role of Products Liability Is Well Suited to Newly Emergent Risks

Products liability serves an essential information-forcing role in producing information about products’ risks and benefits. When it comes to a technology that is as opaque as it is powerful, we need all the information we can get.

Consider the example of pharmaceutical drugs. Notwithstanding the fact that manufacturers have uncovered significant information about the properties of drugs—indeed, the FDA requires drug manufacturers to submit comprehensive clinical trial evidence to enable the FDA to conduct its premarket safety review—there is substantial uncertainty as to how the drug will interact with a very diverse population base. Even though the FDA may have dutifully analyzed all existing health and safety data at the time of its approval, it is almost assured that “new risk evidence” will come to light once the drug is prescribed and used by a much larger and more diverse population. Brand-name drug manufacturers therefore have postmarket surveillance duties and a specific federal duty to revise their labels “to include a warning as soon as there is reasonable evidence of an association of a serious hazard with a drug.” 

As the U.S. Supreme Court has explained, “The FDA has limited resources to monitor the 11,000 drugs on the market, and manufacturers have superior access to information about their drugs, especially in the postmarketing phase as new risks emerge. State tort suits uncover unknown drug hazards and provide incentives for drug manufacturers to disclose safety risks promptly.” The ability to bring products liability lawsuits, in other words, particularly those based on design defect and especially failure-to-warn, ensures that drug manufacturers remain incentivized to continue testing and monitoring the safety profiles of their drugs.

In this way, there is a feedback loop: Liability stimulates the development of additional information about the risk profile of a drug, which induces manufacturers to return with new information to the FDA, which can then impose additional requirements on the basis of this updated information. One could imagine different mechanisms for such postmarket surveillance of products—including ways in which advanced technologies enable regulators themselves to take on this role—but, at the present time, we rely on products liability lawsuits to serve this function.

In the realm of AI, the National Institute of Standards and Technology’s AI Risk Management Framework calls for exactly that: the implementation of continuous monitoring of deployed products “including mechanisms for capturing and evaluating input from users and other relevant AI actors.” Products liability offers a mechanism by which to force industry to comply.

When one considers the appropriate balance between ex ante regulation and ex post products liability, it is worth thinking further about the current state of knowledge about the risks and benefits of a particular product or activity. The information demands on a regulator can be daunting. Especially when confronting a new technology such as AI, there is a fear that ex ante regulation could stifle innovation. Insufficient information prevents the regulator from imposing “optimal” safety requirements that balance risks against benefits. But, at the same time, inaction creates a regulatory void during which society might face unacceptable levels of danger. A products liability regime could serve as an interim or transitional strategy, not only to impose indirect safety requirements on manufacturers but also to produce more safety-related information over time.

Eventually, the optimal regulatory framework for AI will likely involve an interplay of ex ante regulation and ex post products liability law. We currently find ourselves in a “transition period”—one where ex ante regulatory rules for AI have yet to be enacted. The need for products liability-induced postmarket surveillance for AI technologies is all the greater. It can serve an essential information-forcing role, drawing out the insights regulators need to design ex ante standards while incentivizing manufacturers to take care by holding them accountable for harm they are already causing.

Products Liability Can Handle Complex, Interactive, and Dynamic Risks

Certain attributes of AI—namely its capacity to “learn” from its users and, relatedly, its interactive nature—are often cited by those who argue that AI is incompatible with a products liability framework. On the contrary, products liability is up to the task. It has matured over the years and shown itself capable of handling complex, interactive, and dynamic risks posed by products, including those like AI that adapt and change postmarket.

One need look no further than a vexing products liability issue faced by the U.S. Supreme Court in Air & Liquid Systems Corp. v. DeVries. In that case, a manufacturer had produced a “bare metal” turbine. Subsequently, another parts manufacturer added asbestos-laden gaskets to the turbine. Workers were thereby exposed to asbestos and brought suit against the original turbine manufacturer for failure to warn. The question posed was whether the original manufacturer could be held responsible for harms that emanated from the asbestos-laden parts that were added after the manufacturer released its product into the marketplace.

A majority of the Supreme Court answered yes. Three essential lessons about the robust and dynamic nature of our products liability regime emerge. First, products liability imposes postsale duties on manufacturers; their duties to test their products and stay apprised of risks do not end when they release the product into the marketplace. The Restatement (Third) of Torts: Products Liability §10 embraces a postsale duty to warn if “the seller knows or reasonably should know that the product poses a substantial risk of harm to persons or property.” In the context of medical products (pharmaceuticals and medical devices), courts have imposed reasonable testing duties as well as postsale duties to warn, in recognition that product risks are not static and, in turn, the task of mitigating them is not either. As one court held, “The manufacturer has a duty to adequately test its product and design as part of determining whether the risks associated with the product are outweighed by the benefits.” This is the exact type of information a court would need to apply a risk-utility test to a products liability claim. The potential for products liability claims incentivizes companies to conduct postmarket surveillance, and the information obtained through this surveillance informs court decisions on products liability claims. So AI developers cannot simply red-team their models predeployment, document their efforts, and wash their hands of the serious risks that emerge after users begin interacting with their products.

Second, courts recognize that the policies underlying strict products liability are equally applicable to component manufacturers and suppliers. As the California Supreme Court has held: “Like manufacturers, suppliers, and retailers of complete products, component manufacturers and suppliers are ‘an integral part of the overall producing and marketing enterprise,’ may in a particular case ‘be the only member of that enterprise reasonably available to the injured plaintiff,’ and may be in the best position to ensure product safety” (emphasis added). So developers of AI foundation models that try to disclaim responsibility for downstream harms caused by products built on their models may still be liable for creating a defective component.

Third, in a situation like the one before the Supreme Court in DeVries, the manufacturer of a product can even be held liable for risks stemming from another’s activity down the line (be it by another manufacturer or even an end user of the product) so long as the manufacturer had reason to know that such activity would occur. Perhaps most significant of all was the underlying rationale for the Court’s imposition of liability—namely that “the product manufacturer will often be in a better position than the parts manufacturer to warn of the danger from the integrated product.” In other words, it determined that the “bare metal” manufacturer was essentially the “cheapest cost avoider,” or the party that could most readily take measures to avoid (or mitigate) the realization of harms arising from its product. This means an AI manufacturer may still be liable for designing or building a defective product even if a third party introduces the proverbial “asbestos.” Perhaps the manufacturer of an image generator (the turbine) retains a duty to warn or a duty to build safeguards when it learns that downstream users are fine-tuning its product on child sexual abuse material (the asbestos) and using it to generate perverse deepfakes of minors.

Why AI Counts as a “Product”

Products liability holds great promise, but does it even apply to AI? AI undoubtedly challenges the traditional legal approach to defining a “product.” That traditional approach, which rests on a formalistic distinction of products and services, or on an increasingly problematic “tangibility” metric, is ill suited to AI. For AI, we need a new approach that accounts for its ubiquity and potential for exponential harm. Ironically, abandoning the formalistic approach for a flexible definition of a product would be more consistent with the traditional goals of products liability as a framework designed to accommodate new risks presented by transformational technologies. It is time for the doctrine of products liability to evolve again in the face of the novel risks presented by new technology.

Traditional Considerations: The Products/Services Distinction and Tangibility

Historically, courts have drawn a formalistic, bright-line distinction between products, which fall within the realm of products liability, and services, which do not. Yet the case law lacks a consistent definition of what is, and what is not, a service.

The distinction between products and services was, at least initially, a reaction to the rise of strict liability. Courts were wary of subjecting professionals (defined as those who provide “professional services”) to the strict liability standard applicable to those who, for instance, manufacture or sell defective products. The services protection against strict liability has thus been applied to physicians, travel agents, accountants, engineers, and architects—individuals who, in the words of influential jurist Roger Traynor, “sell their services for the guidance of others in their economic, financial, and personal affairs” and who, in turn, should not be “liable in the absence of negligence or intentional conduct.”

The products/services distinction often overlaps with courts’ “tangibility” approach to defining a product. Generally speaking, the less an intangible item resembles a professional service, the more likely that courts will hold it to be a product for the purposes of products liability. This makes sense, insofar as a service involves human judgment and discretion, in contrast to the innate, immutable nature of a tangible product. As one California court has held:

A product is a physical article which results from a manufacturing process and is ultimately delivered to a consumer. A defect in the article even if initially latent is ultimately objectively measurable. On the other hand, a service is no more than direct human action or human performance. Whether that performance is defective is judged by what is reasonable under the circumstances and depends upon the actor’s skill, judgment, training, knowledge, and experience.

But this notion of tangibility as denoting a static, physical article has come under strain in the digital age.

Whether vehicles for the distribution of information are products or services has proved thorny, as shown by the hair-splitting approach in Winter v. G.P. Putnam’s Sons. In that case, the U.S. Court of Appeals for the Ninth Circuit dismissed products liability claims against the publisher of The Encyclopedia of Mushrooms, which was, among other things, a guide to mushroom foraging. Plaintiffs had become severely ill from eating mushrooms they had found while using the book. In finding for the publisher, the court distinguished cases where aeronautical charts were deemed products for purposes of strict liability. It held that the “graphic depictions of technical, mechanical data” of aeronautical charts, which “may be used to guide an individual who is engaged in an activity requiring certain knowledge of natural features,” were unlike the mushroom guide, which, instead, was “like a book on how to use ... an aeronautical chart.” Whereas, in its view, “[t]he chart itself is like a physical ‘product’,” the court found that “the ‘How to Use’ book is pure thought and expression.” The Winter court observed that “[a] book containing Shakespeare’s sonnets consists of two parts, the material and print therein, and the ideas and expression thereof. The first may be a product, but the second is not.” Following Winter’s lead, courts have subsequently denied products liability claims arising from travel guides, nursing textbooks, magazine articles, and diet books.

An AI system has already put someone in the hospital by misidentifying a poisonous mushroom. Was its output more like a chart for direct use, or a book on how to use a chart? And should the answer to the question really come down to the physical form of the information or the way in which the information was conveyed?

Contemporary Considerations: Mass Production and Risk of Widespread Harm

In the context of AI, courts should set aside the formalistic tangibility test and instead inquire whether application of products liability to an AI algorithm or platform is justified based on functional public policy considerations, chief among these being the extent to which AI technologies fit the paradigm of mass production and risk of widespread harm.

The Restatement (Third) of Torts: Products Liability provides a doctrinal basis for this approach: “When the applicable definition fails to provide an unequivocal answer, decisions regarding whether a ‘product’ is involved are reached in light of the public policies behind the imposition of strict liability in tort.” In other words, courts can extend the exemption for services further when the underlying justifications for imposing strict liability on products—in particular mass production and marketing, difficulties of proof, deterrence, and loss spreading—are inapplicable to the inquiry at hand.

For example, a federal district court in Hawaii held that scuba diving classes were not products and were thus exempt from strict liability because, “[t]he Diving Program is not, like mass produced goods, marketed to the general public at large, but rather it is marketed to a discrete set of consumers interested in scuba diving or snorkeling off of the shores of Kauai.” Likewise, an Ohio court looked to relevant public policies when it held that a one-of-a-kind swimming pool, built by the defendant contractor, was not a product because the opportunity for cost-spreading via mass-marketed products was nonexistent. Adopting a similar analytical approach, the U.S. Court of Appeals for the Second Circuit held that maps and navigation charts were subject to products liability precisely because they “reached [the plaintiff] without any individual tailoring or substantial change in contents—they were simply mass-produced.”

This approach, if applied in the as-yet-evolving software context, would lead to treating “tailor-made” software as a service and “off-the-shelf” software as a product. This approach finds support in the Restatement (Third) of Torts: Products Liability, which, drawing an analogy to the Uniform Commercial Code, treats mass-marketed software as a “good” and custom software as a “service.”

To what extent might one distinguish mass-marketed LLMs (akin to products) from those that more closely resemble “custom made” software (akin to services)? ChatGPT, Claude, and Gemini may all resemble off-the-shelf products. The source code running ChatGPT does not differ from one computer to another. To be sure, the experience depends on user input, but the foundational code base is the same, and an equivalent starting product is mass-marketed to users.

By contrast, an AI agent that was fine-tuned on a corpus of work done by a specific law firm, for example, to train it to produce work product that mimics the firm’s style and expertise, may appear more like a service. In that case, mass-marketing considerations no longer apply as they would to, for instance, the generic form of ChatGPT available to the general public online.

But still, courts have differed in their application of these principles, with products liability cases against social media companies further complicating the product/service analysis. Whereas some courts have looked at the work the manufacturer put into customizing or tailoring a product for a specific subset of the population, a California state court found that, notwithstanding the “common algorithm,” social media platforms were fundamentally distinct from other mass-market goods. According to the court, consumers did not experience the platform that resulted from that common algorithm in a uniform manner. The court, in turn, held that consumer expectations as to the algorithm accordingly differed, rendering products liability—which, it held, “proceed[s] from the premise that a product is a static thing”—unsuitable. The court reverted to the traditional line that social media platforms are not analogous to tangible products and thus not subject to products liability.

By contrast, a federal district court in California, presiding over a multidistrict products liability litigation arising from social media, held that the alleged defects—which included endless feeds, lack of screen time limits, ephemeral content, and algorithm design—were sufficiently analogous to tangible products to be deemed products. Specifically, “the Court analyzes whether the various functionalities of defendants’ platforms challenged by plaintiffs are products. For each, the Court draws on the various considerations outlined above (i.e., whether the functionality is analogizable to tangible personal property or more akin to ideas, content, and free expression) to inform the analysis.” While the court’s analysis ultimately identified algorithmic platforms as products, its analysis still hinged on the traditional, formalistic tangibility test.

The tangibility test is likely to sow confusion and discord in products liability cases. Instead of hewing to tangibility considerations, courts should inquire whether application of products liability to AI platforms is justified based on functional public policy considerations, chief among these being the mass-production nature of code, which leads to widespread exposure to risk of harm. With generative AI embedded in a wide range of products and services, the same underlying mass-produced technology is capable of causing harm at an unprecedented scale.

Conclusion: Toward a Cheapest Cost Avoider Approach

An alternative, which altogether avoids the problems arising from these efforts to draw formalistic distinctions between tangible products and services, is for courts to identify the cheapest cost avoider. Courts have already begun to tread this path. In Brookes v. Lyft Inc., a pedestrian struck by a vehicle driven by a Lyft driver brought a strict products liability action against Lyft, claiming that the Lyft application was unreasonably dangerous because, as designed, it required the driver to take his eyes off the road to respond to queries sent by the application. While the Florida court did assert that Lyft’s connection to the application is not akin to a doctor’s exercise of judgment, key to its analysis in deeming Lyft’s platform a product subject to products liability claims was that Lyft should be responsible for harm caused by its application in the same way as other designers because it was in the “best position to control the risk of harm associated with its digital application.”

In the realm of AI, by identifying and imposing liability on the party best poised to avoid and/or mitigate accidents, courts could engage in more straightforward and predictable analyses while, at the same time, advancing the products liability doctrine’s underlying goals and policy considerations.

Many thanks to Jonah Harwood (NYU 2026) for excellent research assistance.


Catherine M. Sharkey is the Segal Family Professor of Regulatory Law and Policy at New York University Law School.
