Amid Federal Push for AI Innovation, Who Will Look Out for Consumers?
With AI innovation bound to accelerate under new federal policies, state attorneys general emerge as vital consumer protectors.

Every wave of technological innovation presents regulators with a difficult question: How can we accelerate further advances and widespread adoption while also safeguarding core consumer interests, such as privacy and autonomy? Artificial intelligence (AI) is no exception. That question has so far gone largely unanswered, and consumer well-being demands an answer sooner rather than later. Enforcement of existing consumer protection laws by state attorneys general (AGs) offers the best chance of safeguarding consumers without unduly impeding AI innovation. Broader recognition of the ongoing role of AGs as consumer advocates can allow Congress and the state legislatures to focus on different aspects of the public policy challenges posed by AI.
The Trump administration is primed to speed up AI innovation. Its efforts to achieve “global AI dominance” will likely only grow in the coming weeks and months. Additionally, DeepSeek, a Chinese company, recently released a highly capable, partially open-sourced model that rivals OpenAI’s frontier models and, in doing so, sent shockwaves across the AI community and Capitol Hill. A focus on winning the AI race will likely render consumer protection issues an afterthought in federal conversations.
State legislatures are unlikely to step in immediately to safeguard the general public. Any consumer protection legislation they pass will likely not take effect for several months, if not longer. For instance, the Colorado AI Act, which was passed in 2024 and regulates high-risk AI systems, does not take effect until Feb. 1, 2026. Even once such laws are in place, how effectively state regulators will enforce them remains an open question. Regulators may lack the technical know-how to distinguish risky products from novel yet safe offerings. They might also err on the side of over-regulation, denying consumers the benefits of AI. In the interim period before proposed and enacted state laws formally kick into gear, it may seem as though consumers have little to no recourse against AI bots, agents, and other tools.
This regulatory vacuum, however, does not mean that states lack the tools to protect consumers from AI-related harms in the immediate future. State AGs are uniquely positioned to fill this gap through existing consumer protection frameworks that have successfully balanced innovation and public safety for decades.
AGs, unlike legislators, have broad mandates to look out for consumers and to punish AI companies that release harmful products or rely on deceptive marketing schemes. Enforcement of state unfair and deceptive acts or practices (UDAP) statutes by state AGs has long served as one of the primary means of achieving this balance. UDAP statutes are technology agnostic. They do not directly stymie AI development and research. Instead, they prevent private actors from engaging in certain practices that result in consumer harm. This iterative, responsive approach to safeguarding consumers may be preferable to AI-specific legislation, which risks tilting too far toward one end of that balance.
Ongoing AI Threats to Consumers
Continued progress in AI will have very real, significant impacts on everyday Americans. In a world in which the federal government focuses its political capital on AI investment and state legislators lurch from one sweeping regulatory proposal to the next, those Americans will have to look to other actors to impose some bumpers on AI’s integration into general society. AGs have so far shown a willingness to do just that. So long as that willingness persists and AGs receive the necessary technical and financial support to carry out their mandates, this may be the happy medium that allows the U.S. to lead in AI while also looking out for consumers.
Bad actors, including everyone from criminals to inattentive companies, have imperiled consumer well-being by misusing AI tools. For those unfamiliar with how criminals exploit AI tools for their nefarious ends, the FBI produced a clear, albeit dry, summary in a December 2024 warning:
The FBI is warning the public that criminals exploit generative artificial intelligence (AI) to commit fraud on a larger scale which increases the believability of their schemes. Generative AI reduces the time and effort criminals must expend to deceive their targets. Generative AI takes what it has learned from examples input by a user and synthesizes something entirely new based on that information. These tools assist with content creation and can correct for human errors that might otherwise serve as warning signs of fraud. The creation or distribution of synthetic content is not inherently illegal; however, synthetic content can be used to facilitate crimes, such as fraud and extortion.
Criminal exploitation of generative AI has manifested in several novel versions of old ploys. Through voice cloning scams, bad actors have convinced family members to turn over large sums of money in response to calls allegedly from their loved ones in dire situations. Through emails and texts vividly describing or depicting fictitious natural disasters, scammers have managed to trick consumers into sending relief to victims of events that have not occurred. And through advanced phishing campaigns, criminals have managed to convince unsuspecting persons to send sensitive information that may then be used to steal their identity, among other harmful outcomes.
Misuse of AI is not solely the realm of criminals. Consumers have suffered from other socially harmful uses of AI, such as the use of pricing algorithms by landlords. As alleged by the Department of Justice, apps such as RealPage have allowed landlords to maximize profits by collectively sharing nonpublic data with the app, which then produces a recommended rent for each unit. Adoption of those recommendations allows each landlord to squeeze as much as possible from renters. Hotels and grocers have also made use of algorithmic pricing tools that may run afoul of state and federal antitrust laws.
The development of AI may also place consumers in a compromised position. AI labs such as OpenAI and Anthropic rely on incredibly large datasets to train their models. Absent quality data, their models produce increasingly unhelpful results. To avoid relying on synthetic data, labs have developed novel ways to gather as much data from their users as possible. OpenAI, for instance, retains users' prompts to further train its models. By default, it also stores personal information, such as a user's profession and location, to tailor the model's responses. While this is useful in some contexts, it may infringe on a user's sense of privacy and autonomy in others.
The Role of State Attorneys General in Safeguarding Consumers
Over the past few years, as AI has worked its way into the daily lives of consumers, state AGs have supported a high-level national policy that embraces AI innovation while also insisting on robust enforcement of consumer protection laws. They appear poised to continue this function in the future. Their continued success in shielding consumers from criminals’ scams and companies’ sketchy products may be the best means of achieving the balance outlined above.
A survey of state AG actions reveals bipartisan attention to AI-amplified threats to consumers. In many cases, AGs have served as advocates by pushing Congress and federal regulators to take a closer look at the harms posed by AI. For instance, Alabama AG Steve Marshall joined dozens of AGs in calling on Congress to study the exploitation of children by AI. Relatedly, Arizona AG Kris Mayes and a coalition of other AGs urged the Federal Communications Commission to forcefully apply the Telephone Consumer Protection Act against marketers using AI to impersonate a human voice.
In other instances, AGs have leveraged state laws to hold bad actors accountable for misuses of AI. South Dakota AG Marty Jackley, for example, successfully lobbied for and enforced a new law related to computer-generated child pornography. Former Vermont AG T.J. Donovan sued Clearview AI for collecting images of Vermonters from the web and then using AI to “map” those people’s faces. Several AG offices have signed on to the aforementioned Justice Department suit against RealPage.
UDAP enforcement has been and will continue to be a particularly important means of protecting consumers amid the rapid adoption of AI. As outlined by California AG Rob Bonta, the state's UDAP statute applies to several of the most commonly expressed AI consumer protection concerns, from exaggerations of an AI tool's capabilities to the use of AI to impersonate, defraud, and deceive others. Texas AG Ken Paxton has gone a step further by enforcing the state's UDAP statute against an AI company. In September 2024, Paxton reached a settlement with Pieces, which provided its AI tool to at least four major Texas hospitals. In theory, Pieces's AI product can quickly and accurately summarize patients' information for medical providers. In practice, as alleged by the AG, the product was far less accurate than advertised. The settlement required Pieces to update its accuracy estimates and to ensure hospital staff know of its limitations. This action signals that hyper-specific legislation attempting to shape the application of AI in every context is likely unnecessary and may do more harm than good by creating a confusing regulatory landscape. A patchwork approach to regulating AI use by different professions, for instance, may discourage beneficial applications of novel tools. AI developers may even avoid creating tools for certain use cases if the regulatory picture is too complicated.
AGs have also tried to create more regulatory predictability by clarifying how they see existing law applying in the AI context. Clarifying statements by AGs in California and Massachusetts as to how existing privacy laws and consumer protection frameworks apply to AI, for example, can assuage AI innovators unsure of which actions may subject them to legal scrutiny while also informing consumers of when they may have recourse against bad actors.
The important role of AGs in protecting consumers faces a number of constraints. For one, many AG offices lack the technical capacity to stay abreast of the latest AI advances. Not all AGs have Civil Rights and Technology Initiatives such as the one created by New Jersey AG Matthew Platkin. Additionally, consumer complaint mechanisms are often as intuitive as differential calculus. Consumers who feel cheated or deceived by an AI agent may be unsure of how to report their concerns—depriving them of a justified remedy and the AG of valuable information about emerging threats.
The upshot is that AG offices need help. Civil society members can and should reach out to AG offices to offer their expertise through trainings and briefings. AG offices should also seek guidance on how to improve their consumer complaint workflow. They may want to look to India for an example. Consumer protection authorities there have developed simple mechanisms, such as a nationwide consumer protection hotline, for consumers to easily and clearly report issues.
State legislators concerned about consumers in the AI age should avoid trying to develop novel regulatory regimes and should instead focus on easing the aforementioned constraints on their AGs' capacity. The hundreds of AI bills pending before state legislatures risk distracting legislators from other pressing issues. Rather than trying to get into the weeds of a technology that changes on a weekly basis, legislators should defer to state AGs as the primary defenders of consumer well-being. Legislators truly worried about consumers should encourage their colleagues to invest in the technical capacity of AGs and in the ability of those offices to learn from and work with the general public as AI adoption continues.
***
In an era of unprecedented technological change, there is a temptation to reinvent the regulatory wheel. Doing so in the AI context will only gum up the progress of a transformative technology. Investment in state AGs, which have an existing mandate to protect consumers, represents a more practical and balanced regulatory approach. Their role as primary enforcers of consumer protection laws offers a vital counterbalance to the rapid deployment of AI technologies while avoiding the pitfalls of overly prescriptive regulation.
Rather than awaiting comprehensive AI legislation or relying on a patchwork of state-specific AI laws, empowering AGs to vigorously enforce existing consumer protection statutes provides a more nimble and effective approach. This strategy echoes successful regulatory models from other transformative periods, such as the emergence of mass advertising in the early 20th century, where enforcement of broad consumer protection principles proved more durable than attempts at granular regulation of specific practices.