
Using AI to Improve the Government—Without Violating the Privacy Act

Kevin Frazier
Monday, February 10, 2025, 2:00 PM
Proper use of AI can transform and improve the federal government. Improper use violates the Privacy Act.


“There is nothing inappropriate or nefarious going on.” That’s how Madi Biedermann, deputy assistant secretary for communications at the Department of Education, described members of the U.S. DOGE Service feeding sensitive data stored by the department into an artificial intelligence (AI) system. Application of the Privacy Act of 1974 to this ongoing disclosure, however, suggests that the DOGE team is running afoul of the law.

Proper Use of AI Can Improve and Has Improved Governance

Elon Musk and the rest of DOGE rightly believe that AI can aid efforts to modernize outdated processes, identify waste, and otherwise benefit the public. Since the 2000s, federal officials have known that “data mining, machine learning, and AI are powerful tools for strengthening government’s ability to deter or prevent fraudulent activity[.]” Modern advances have only increased the potential of AI and related tools to improve governance.

Under the Biden administration, federal agencies explored numerous ways to integrate AI into their respective mandates. A total of 37 agencies documented 1,757 public AI uses as of late 2024. Agencies turned to AI to tweak internal processes, accelerate and enhance the delivery of services and benefits, and conduct medical and health research. A few examples make clear that agencies had discovered innovative, positive uses of AI. The U.S. Army Corps of Engineers relied on AI to more accurately predict flooding. The Social Security Administration employed AI to proactively identify individuals who may be eligible for, but not currently receiving, critical benefits. The Department of Defense bolstered its efforts to protect sensitive data by implementing AI. That said, agencies faced clear guidelines as to how and when they could deploy AI.

A series of procedural safeguards kicked in if an agency determined that an AI use might implicate the public’s rights or safety. Per OMB Memorandum M-24-10, such uses required the agency to stop and take the following steps: complete an AI impact assessment; conduct further testing to show the AI would work as intended; and await an independent evaluation of that AI use, including whether its benefits outweighed its risks. If the use in question passed those steps, the agency faced ongoing obligations, including continuous monitoring and regular risk evaluations. As onerous as these and related obligations imposed under Biden’s 2023 executive order on AI may seem, the Government Accountability Office reported a perfect record of compliance after surveying federal agencies in 2024.

Alternatively, more streamlined frameworks could increase adoption of AI by federal agencies while still preventing uses that may imperil the public. For instance, the National Institute of Standards and Technology could establish an AI testing facility or—as proposed by Tina Huang—an “AI testbed” where domain experts would conduct rigorous but streamlined evaluations of AI applications, helping accelerate the transition from testing to agency implementation. Skipping such safeguards entirely, however, may delay AI’s potential to improve the federal government, given ongoing public concern over government use of AI.

The majority of Americans remain highly skeptical of AI use generally, according to a recent survey of nine states conducted by Heartland Forward. They fear that it may imperil their privacy. They suspect it will diminish their professional prospects. They expect it to make biased decisions. Yet they remain optimistic that it can drive positive outcomes in areas subject to government oversight, such as education and health care. A Pew Research Center poll from 2023 reports similar results. Alondra Nelson, who worked for the White House Office of Science and Technology Policy under the Biden administration, likewise told the Washington Post that Americans had concerns about opaque uses of AI by the federal government.

With public backing, AI has immense potential to transform how the federal government operates and serves its citizens. The Department of Education offers a particularly vivid example of this potential. With an annual budget exceeding $100 billion and responsibility for millions of student loan records, the department’s data challenges are staggering. An AI system could theoretically:

  • Identify patterns in successful grant outcomes to inform future funding decisions.
  • Detect early warning signs of schools at risk of failing their students.
  • Optimize the distribution of federal resources to maximize educational impact.
  • Streamline administrative processes to reduce burden on educators and administrators.

The efficiency gains could be enormous. Across the entirety of the federal government, AI-powered analysis could save hundreds of millions of dollars in administrative costs while improving service delivery.
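
To make these possibilities concrete, here is a deliberately simple sketch of the “early warning” idea from the list above. It is an illustrative stand-in for a real model, not a proposal: the school names, completion rates, and the 1.5-standard-deviation threshold are all hypothetical, and, notably, the analysis runs on de-identified, school-level aggregates rather than PII.

```python
# A minimal, hypothetical sketch of the "early warning" use case above.
# All names, rates, and the threshold are invented for illustration; the
# input is de-identified, school-level aggregate data, not PII.
from statistics import mean, stdev

completion_rates = {
    "school_a": 0.91,
    "school_b": 0.88,
    "school_c": 0.86,
    "school_d": 0.89,
    "school_e": 0.90,
    "school_f": 0.54,  # the kind of outlier a human reviewer should see
}

rates = list(completion_rates.values())
mu, sigma = mean(rates), stdev(rates)

# Flag schools more than 1.5 standard deviations below the mean for
# human review; the analysis surfaces candidates, it does not decide.
flagged = [s for s, r in completion_rates.items() if r < mu - 1.5 * sigma]
print(flagged)  # ['school_f']
```

The design point is that analyses like this can run entirely on aggregates the agency already maintains, which is one way to pursue the efficiency gains described above without touching protected records.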

Improper Use of AI Hinders Trust and Governance and, in the Case of DOGE, Likely Violates the Law

The possibility of AI-driven efficiencies does not justify, let alone legally authorize, government use of AI. The DOGE team anticipates that feeding Department of Education data, including personally identifiable information (PII), into an AI system will expose bloat and facilitate a massive cut to the department’s budget and staff. The Washington Post reports that the team plans to follow a similar strategy to justify reforms at other departments and agencies.

The Privacy Act of 1974 stands in the way of that plan. The act prohibits an agency from “disclos[ing] any record which is contained in a system of records by any means of communication to any person, or to another agency,” subject to a dozen exceptions. DOGE’s activities appear to violate the letter and spirit of the act. 

PII maintained by the Department of Education surely qualifies as a record under the act, which defines a record as “any item, collection, or grouping of information about an individual that is maintained by an agency[.]”

Entering PII into an AI system likely qualifies as “disclosure.” Courts have ruled that disclosure covers transferring a record as well as “granting access” to a record. Publication of information on an agency’s website, for instance, qualifies as an unauthorized disclosure. Though it is not publicly known whose AI system DOGE is using, whichever company supplies it has received access to Department of Education records. The question may be closer if DOGE is relying on a model running locally on federal government computers. Even then, however, the mere act of DOGE members accessing Department of Education records likely qualifies as a prohibited disclosure: the act forecloses any “nonconsensual disclosure of any information that has been retrieved from a protected record.”
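
To see why courts treat this as “granting access,” consider the mechanics of querying a hosted model. The sketch below is purely illustrative and assumes nothing about DOGE’s actual setup: the vendor endpoint, payload shape, and record fields are hypothetical, with Python’s requests library standing in for whatever client might be used.

```python
# Illustrative only: a hypothetical call to a hosted AI model. The endpoint,
# payload shape, and record fields are invented for this sketch.
import requests

record = {
    "borrower_name": "Jane Doe",   # PII from a protected system of records
    "ssn_last4": "1234",
    "loan_balance": 42310.55,
}

# Serializing the record and POSTing it transmits it to the vendor's
# servers. The vendor now has access to the record; on the case law
# discussed above, that access is the disclosure, regardless of what the
# model does with the data afterward.
resp = requests.post(
    "https://api.example-ai-vendor.com/v1/analyze",  # hypothetical endpoint
    json={"input": record},
    timeout=30,
)
```

A locally hosted model changes only where the computation runs; the people querying it are still retrieving information from a protected record, which is the access the act regulates.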

Though a litany of exceptions may permit an agency to disclose a record, those exceptions do not apply here. Agencies most often rely on the following exceptions, which apply when the disclosure is made:

  1. to officers and employees of the agency that maintains the record, who have a need for the record in the performance of their duties;
  2. under the Freedom of Information Act; or
  3. pursuant to an established routine use identified in the System of Records Notice (SORN) that has been published in the Federal Register.

The first exception does not apply because the members of the DOGE team do not qualify as officers or employees of the Department of Education. The second does not apply because the records are not being released pursuant to a Freedom of Information Act request. The third is also inapplicable. A SORN must come from the agency seeking the exception—here, the Department of Education—and no routine use published by the department appears to cover this disclosure. SORNs serve as an important procedural safeguard for the public. As explained by the Office of Government Ethics, a SORN “describes what information is collected and maintained in the system, how the information is stored and used, and the procedures by which individuals can request access to, or correction of, information about them.”

Violations of the Privacy Act can result in civil and criminal penalties. The applicable civil causes of action are likely unavailable here because they apply only in limited circumstances, such as the unlawful denial of a request to amend or access a record. Criminal penalties against Department of Education employees may be available. An agency employee who knowingly and willfully discloses identifiable information in violation of the act may be guilty of a misdemeanor and face a fine of up to $5,000. DOGE members may face the same criminal penalties if they “knowingly and willfully request[ed] or obtain[ed]” a covered record under false pretenses.

The Silicon Valley Mindset Meets Bureaucratic Reality

The DOGE team’s approach reflects its Silicon Valley origins. “Move fast and break things” works differently when what you’re breaking might be federal law. Yet there’s a genuine idealism in their effort to modernize government operations.

AI can and should transform the federal government. But its potential will go unrealized as long as DOGE prioritizes those transformations over following the law. The Privacy Act’s fundamental principles—individual privacy, limited disclosure, and purposeful use—remain vital in the age of AI. These principles need not hinder innovation if we approach their implementation thoughtfully.


Kevin Frazier is a contributing editor at Lawfare, an adjunct professor at Delaware Law, and a Democracy and Tech Fellow with the Leadership Center for AG Studies.
