A Dynamic Governance Model for AI
The unique risks—and opportunities—of AI demand a policy-neutral, extra-regulatory model for standards and compliance.

As artificial intelligence (AI) evolves from a tool of efficiency to a determinant of policy and governance, a fundamental shift is underway: Technologists are no longer just innovators—they are becoming political actors. Once largely bound to the legislative frameworks crafted by elected officials, the private sector is now setting the agenda, drafting its guidelines, and, in many ways, governing the future of AI.
This transformation raises urgent questions about democratic resilience and the role of public institutions. As AI systems influence economic structures, information flows, and security landscapes, decision-making shifts from public to private hands. It is crucial to develop a governance structure that safeguards democracy while encouraging innovation and preventing capture by a handful of dominant industry players.
The tech industry is centered on innovation and market success. Democratic governments are designed to uphold public trust and protect societal interests. This duality creates an opportunity for collaboration. We introduce a dynamic governance model as a policy-agnostic, extra-regulatory architecture, including public-private partnerships for standards setting and the creation of a market-based ecosystem for audit and compliance. The model builds upon existing institutions and provides a closed-loop process to review existing mechanisms of liability and accountability without necessitating the creation of additional state bureaucracy.
This work began with a concern for technology’s role in driving polarization and radicalization, which erode democratic institutions. An initially techno-deterministic view of the problem—which assumed that technological solutions could solve the issues created by the technology—morphed into a more nuanced approach, acknowledging the role of the business models and incentives that drive the industry. Several beliefs persisted: that industry and government can complement rather than conflict with each other, and that technical innovation is needed to address challenges to democracy. Our conclusions are based on research conducted during the fall of 2024 and winter of 2024-2025. We combined qualitative interviews, legislative analysis, and machine learning techniques to identify patterns in AI policy development. We conducted 49 interviews with industry leaders, policymakers, and congressional staff, ensuring a diverse representation of perspectives. In parallel, we performed a quantitative analysis of 150 AI-related bills introduced in the 118th U.S. Congress, employing natural language processing (NLP) methods to classify and assess legislative priorities. This allowed us to map areas of convergence and divergence between industry and government, shaping a dynamic governance model as a practical framework for balancing innovation with accountability. Our detailed findings are available in the paper “Governance at a Crossroads: Artificial Intelligence and the Future of Innovation in America.”
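For readers interested in the mechanics of the quantitative step, the sketch below illustrates one way a bill-classification pipeline of this kind can be assembled. The topic labels, example summaries, and model choice are hypothetical stand-ins, not the exact methodology used in the study.

```python
# Minimal sketch of an NLP classification step for legislative texts.
# Labels and example summaries are hypothetical; the study used its own
# taxonomy and the full set of 150 bills from the 118th Congress.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical hand-labeled seed set: (bill summary, policy priority).
seed_bills = [
    ("Requires impact assessments for automated decision systems used in hiring", "civil-rights"),
    ("Directs NIST to develop voluntary AI risk management standards", "standards"),
    ("Restricts export of advanced AI chips to foreign adversaries", "national-security"),
    ("Establishes disclosure requirements for AI-generated political ads", "elections"),
]
texts, labels = zip(*seed_bills)

# TF-IDF features feeding a simple linear classifier.
classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
classifier.fit(texts, labels)

# Assign a priority label to a new (hypothetical) bill summary.
print(classifier.predict(["Creates an AI safety institute to evaluate frontier models"]))
```

In practice, a classifier like this would be trained on a much larger labeled sample and validated by hand before being used to characterize legislative priorities.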
This piece summarizes our findings and the risks to democracy associated with the rapid and unregulated adoption of artificial intelligence. We present a proposal for a policy framework that establishes governance over time and advances goals across the political spectrum. Lastly, we suggest that this approach can enhance the U.S.’s global technical leadership, lending it the credibility necessary to spearhead the AI revolution.
AI’s Risks: Innovation and Its Unintended Consequences
The ability of AI to process vast amounts of data, automate complex decision-making, and optimize processes has made it a critical tool across industries. However, without governance mechanisms that balance progress with accountability, AI could reinforce biases and undermine democratic institutions.
AI can manipulate information and decision-making on an unprecedented scale. While the effect of misinformation on society has been a topic of debate, generative AI’s ability to reduce the cost of producing disinformation campaigns to almost zero is undoubtedly problematic. Social media platforms powered by AI-driven recommendation engines have already shown their capacity to shape political discourse, amplify disinformation, and polarize societies. As AI becomes more autonomous, it poses a significant challenge to governance. While 2024—the biggest election year in history—was largely unaffected, it is still too early to discern the longer-term effects of the erosion of citizens’ trust in democratic institutions.
The development of large-scale AI models requires vast computational resources, concentrating control in the hands of a few major technology companies. Recently, the Chinese startup DeepSeek demonstrated the ability to fine-tune models at a fraction of the cost incurred by its Western rivals. DeepSeek challenged industry norms by leveraging open-source strategies and questioned the dominance of high-budget laboratories. Their rapid progress raised concerns about China’s AI capabilities, prompting allegations that the company evaded U.S. export controls to obtain restricted chips amid growing global tensions about technological competition and regulation.
However, the overall expense of development, including pre-training, remains high. Scaling laws have yet to deliver on the promise of artificial general intelligence—the elusive goal pursued by the leading frontier AI labs—and only the largest players can afford the necessary investments. Even if the progressive adoption of open-source practices commoditizes base models, costs continue to escalate as computing demands shift from training to inference—particularly where AI, and especially reasoning models, replaces traditional search, which requires even greater computational resources. Without strategic intervention, this centralization risks monopolistic dominance over AI capabilities, further amplifying the influence of corporate interests on public policy.
Industry and Congressional Perspectives
Understanding how the U.S. arrived at this moment of democratic urgency requires a closer examination of perspectives within both industry and Congress about technical innovation, industrial policy, and regulation—or the absence of it. We conducted 49 in-depth interviews with C-level tech industry executives and policymakers, finding consensus on the inadequacy of the current governance model. However, there is little agreement on what should replace it.
Industry Views on AI Governance
Tech industry leaders emphasize the need for regulatory clarity while warning against rigid, one-size-fits-all policies that could stifle innovation. Many companies have proactively implemented self-regulation measures, including ethical AI principles and internal audit mechanisms, to fill the current regulatory vacuum. However, some industry stakeholders privately acknowledge that voluntary commitments are insufficient to prevent harmful AI applications.
One executive described AI as the “most sophisticated propaganda weapon ever created,” underscoring the risks of unchecked algorithmic influence. Another emphasized the importance of liability frameworks: “Companies should have responsibility for the models they create and govern them across their life cycle.” Despite these concerns, many in the industry resist prescriptive regulatory measures, arguing that heavy-handed government intervention could slow progress and undermine U.S. competitiveness.
Industry leaders also highlight the need for flexible governance approaches that accommodate the rapid pace of technological change. Some have advocated for sector-specific guidelines that allow regulators to tailor policies to different AI applications rather than imposing a universal set of rules that risk overlooking the nuances of AI use in health care, finance, national security, or consumer technology.
Congressional Challenges in Setting Policy for AI
Congress faces its own set of obstacles in crafting effective AI policy. Despite increasing knowledge and competencies in the area and a flurry of recent legislative activity, Congress still lacks the political will and urgency to act, choosing to take its time. In the meantime, the lack of centralized AI governance has resulted in a patchwork of state regulations, creating compliance challenges for businesses operating across multiple jurisdictions.
Interviews with dozens of members of Congress and staff reveal a growing awareness of AI’s risks but also deep divisions over the appropriate regulatory approach. One congressional staffer noted, “We don’t want to repeat the mistakes we made with social media, where we were too late to act.” Another congressional staffer highlighted concerns about AI’s impact on national security, stating, “We want to be strategically competitive against China, but we also need safeguards to prevent AI from being weaponized against democratic institutions.”
Despite these challenges, there is emerging bipartisan agreement on the need for AI governance that balances security and innovation and extends the country’s leadership in this space. However, Congress has yet to coalesce around a single framework.
The Dynamic Governance Model: A New Approach to AI Policy
AI governance cannot be treated as a binary choice between unfettered innovation and rigid control. The history of technological diffusion in the U.S. demonstrates that well-structured policy can promote progress while mitigating risks. The approach taken with the development of the internet, for instance, included industry collaboration, legislative oversight, and the adaptation of existing legal frameworks.
Today’s AI challenge requires a similar balance. Industry leaders emphasize the need for flexibility, arguing that overly stringent regulations could stifle innovation and push companies to operate in less restrictive jurisdictions. Policymakers, by contrast, stress the importance of regulatory mechanisms to ensure AI does not exacerbate inequality, increase fraud, or threaten privacy and national security.
The solution lies in a governance model that is adaptive, inclusive, and capable of evolving with technological advancements.
To navigate the challenges of AI governance, we propose a dynamic governance model—an adaptive, public-private framework designed to balance innovation with accountability. The model is a policy-agnostic, extra-regulatory architecture. We are not advocating for any specific legislation or statutory change. The focus, instead, is on a method that can be applied to different policy objectives, from smaller, niche-based policies to larger AI national policy initiatives.
The process starts with the government—executive or legislative—defining a policy goal. It can be an initiative dictated in an executive order, a proposed rulemaking by an existing agency, or a statutory change initiated by Congress. With a policy goal as the objective, the model is invoked. It consists of three core components:
- Public-Private Partnerships for Evaluation Standards
  - Government, industry, and civil society collaborate in a structured manner to establish clear evaluation metrics for AI systems.
  - These standards ensure AI systems meet ethical, transparency, and safety benchmarks while remaining adaptable to technological advancements.
- A Market-Based Ecosystem for Audits and Compliance
  - Independent third-party entities oversee AI audits to verify compliance with established standards.
  - A certification system allows companies that adhere to best practices to signal their commitment to responsible AI deployment, fostering consumer and investor trust.
- Accountability and Liability Mechanisms
  - Clear liability structures are set through legislative and judicial processes to ensure that AI developers and deployers are accountable for harms caused by their systems.
  - Courts, executive agencies, and legislative bodies play complementary roles in enforcing these accountability measures, ensuring legal clarity and predictability.
The process can be initiated at any of the three stages, depending on the specific situation. It is also designed to evolve, adapting as the technology changes.
Enhancing Public-Private Partnerships for AI Evaluation Standards
The success of the dynamic governance model depends on structured public-private partnerships that define evaluation standards, ensuring AI deployment aligns with safety, fairness, and accountability principles. These partnerships go beyond advisory roles and instead operate as active co-regulatory entities. Inspired by sectoral models—such as collaboration between the Financial Stability Oversight Council (FSOC) and the National Institute of Standards and Technology (NIST)—this model envisions industry participation in setting baseline AI safeguards that evolve with the technology.
Evaluation standards must remain adaptable for AI governance to succeed. Public-private governance entities would establish flexible auditing protocols, ensuring standards are revised as AI technology advances.
To mitigate regulatory capture, including undue influence by major existing AI labs, the governance framework recommends a rotating leadership structure to prevent any single industry player from dominating the standards-setting process. Furthermore, oversight would be distributed across a mix of regulatory agencies, civil society organizations, and independent auditors to enhance transparency.
Strengthening the Market-Based Ecosystem for AI Audits and Compliance
One of the primary concerns about AI governance is the lack of effective auditing mechanisms to assess compliance. A dynamic governance model envisions an AI auditing marketplace, where independent entities certify AI systems in real time, in both pre-deployment and post-deployment phases. This model draws from the regulatory constructs in industries like food safety (FDA inspections) and finance (SEC and FINRA compliance audits), where continuous monitoring ensures compliance without stifling innovation.
Key components of this AI audit and compliance ecosystem would include:
- Third-party verification bodies that audit AI models based on transparency, robustness, and ethical standards—standards that have been created in partnership with the industry.
- A tiered certification system that rates AI systems based on their adherence to risk-mitigation guidelines (see the sketch below).
- A real-time compliance dashboard, allowing regulators to monitor AI performance metrics and ensuring that AI models evolve without violating preestablished safeguards.
By leveraging market-based incentives—such as regulatory fast-tracking for certified AI models or regulatory sandboxes for liability protection over limited periods—the system would encourage voluntary compliance while maintaining regulatory oversight.
We acknowledge that this audit ecosystem does not exist today. One of the features of our proposal is its gradual build-out using market forces. The evaluation standards created through public-private partnerships should include criteria for ongoing auditor evaluation to mitigate the risk of “audit and process theater.”
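To make the tiered certification idea above more concrete, the following minimal sketch shows how a certification record and its validity check might be represented. The tier names, controls, auditor, and dates are hypothetical and do not correspond to any existing scheme.

```python
# Illustrative-only data model for a tiered AI certification record.
# All tiers, controls, and names are hypothetical.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class CertificationTier(Enum):
    BASELINE = 1     # meets minimum transparency and documentation criteria
    AUDITED = 2      # passed an independent third-party audit
    CONTINUOUS = 3   # reports live compliance metrics to a regulator-facing dashboard

@dataclass
class AuditFinding:
    control: str     # e.g., "bias testing", "incident response"
    passed: bool
    notes: str = ""

@dataclass
class CertificationRecord:
    system_name: str
    auditor: str
    issued: date
    expires: date
    tier: CertificationTier
    findings: list[AuditFinding] = field(default_factory=list)

    def is_valid(self, today: date) -> bool:
        # A certification is valid only while unexpired and with no failed controls.
        return today <= self.expires and all(f.passed for f in self.findings)

record = CertificationRecord(
    system_name="ExampleCreditScorer",
    auditor="Hypothetical Audit Co.",
    issued=date(2025, 1, 15),
    expires=date(2026, 1, 15),
    tier=CertificationTier.AUDITED,
    findings=[AuditFinding("bias testing", True), AuditFinding("incident response", True)],
)
print(record.is_valid(date(2025, 6, 1)))  # True
```

A machine-readable record of this kind is what would let a certification feed regulatory fast-tracking, sandbox eligibility, or a compliance dashboard without manual review at every step.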
Accountability and Liability Mechanisms
The final pillar of the governance model ensures clear legal liability structures for AI systems. Many of today’s AI risks stem from ambiguities around legal responsibility—if an AI system causes financial losses, security breaches, or misinformation campaigns, who is accountable?
The use of a dynamic governance model can foster the creation of:
- A shared liability framework, distributing responsibility among AI developers, deployers, and end users.
- Algorithmic traceability requirements, ensuring that companies document decision-making processes for legal and regulatory review (see the sketch below).
- An AI adjudication framework, where specialized courts or regulatory bodies resolve AI-related disputes, ensuring that AI litigation does not overwhelm the traditional legal system.
These accountability mechanisms not only protect consumers but also incentivize companies to proactively develop safer AI models, building on lessons from standards setting and auditing. Moreover, by strengthening the accountability phase, one creates incentives for compliance and participation in standards setting, generating a virtuous engagement loop. The transparent execution of such a model builds trust among all market participants.
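As an illustration of the algorithmic traceability item above, the following minimal sketch shows what a single per-decision audit record might look like. Every field name and the example decision are hypothetical; they are meant only to suggest what "documenting the decision-making process" could look like in practice.

```python
# Minimal sketch of a per-decision trace record for regulatory review.
# Field names and the example are hypothetical.
import json
from datetime import datetime, timezone

def log_decision(model_id: str, model_version: str, inputs: dict, output, rationale: str) -> str:
    """Serialize a single model decision into an append-only audit record."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "inputs": inputs,        # or a hash/reference when inputs are sensitive
        "output": output,
        "rationale": rationale,  # human-readable explanation or feature attributions
    }
    return json.dumps(record)

print(log_decision("loan-screener", "2.3.1",
                   {"income": 52000, "debt_ratio": 0.31},
                   "approve",
                   "Debt ratio below policy threshold of 0.36"))
```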
An Example of the Dynamic Governance Model in Action
A major financial institution deploys an AI-driven trading system incorporating agentic AI models—that is, models that independently learn and adapt—designed to autonomously execute high-frequency trades. Initially, the system performs well, optimizing returns and identifying arbitrage opportunities beyond human traders’ capabilities. However, within weeks, market regulators notice unusual price swings in certain asset classes, prompting concerns over potential systemic risks.
Using a dynamic governance model, a multistakeholder review process is initiated. The AI system undergoes an independent audit conducted by a certified third-party entity specializing in AI risk assessment.
This audit, mandated under the model’s market-based ecosystem for audits and compliance, finds that the AI agent had been placing multiple orders at different price points to stoke interest in a group of securities, then placing new buy or sell orders to take advantage of the artificially moved prices, and finally canceling the original orders that drove the security’s price up or down. This is an illegal manipulation technique sometimes referred to as “layering.” The AI agent, however, developed this strategy on its own in pursuit of its profit-maximizing goal.
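For illustration only, the following simplified heuristic sketches how an auditor might flag order sequences consistent with layering. The data shape, thresholds, and example orders are hypothetical, and real market-surveillance systems are considerably more sophisticated.

```python
# Simplified, illustrative heuristic for flagging activity consistent with
# layering: many resting orders on one side, an execution on the other side,
# then cancellation of the resting orders. All fields and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class OrderEvent:
    timestamp: float   # seconds
    side: str          # "buy" or "sell"
    action: str        # "place", "execute", "cancel"
    order_id: str

def flag_layering(events: list[OrderEvent], window: float = 5.0, min_layers: int = 3) -> bool:
    """Flag a window in which >= min_layers orders are placed on one side,
    an order executes on the opposite side, and the placed orders are canceled."""
    events = sorted(events, key=lambda e: e.timestamp)
    for i, anchor in enumerate(events):
        if anchor.action != "place":
            continue
        in_window = [e for e in events[i:] if e.timestamp - anchor.timestamp <= window]
        placed = [e for e in in_window if e.action == "place" and e.side == anchor.side]
        executed_opposite = any(e.action == "execute" and e.side != anchor.side for e in in_window)
        canceled_ids = {e.order_id for e in in_window if e.action == "cancel"}
        layered_and_canceled = sum(1 for e in placed if e.order_id in canceled_ids)
        if layered_and_canceled >= min_layers and executed_opposite:
            return True
    return False

sample = [
    OrderEvent(0.0, "buy", "place", "b1"),
    OrderEvent(0.2, "buy", "place", "b2"),
    OrderEvent(0.4, "buy", "place", "b3"),
    OrderEvent(1.0, "sell", "execute", "s1"),  # trade into the artificially raised price
    OrderEvent(1.5, "buy", "cancel", "b1"),
    OrderEvent(1.6, "buy", "cancel", "b2"),
    OrderEvent(1.7, "buy", "cancel", "b3"),
]
print(flag_layering(sample))  # True
```

The point of the sketch is that the evaluation standards agreed through the public-private partnership can be expressed as testable checks like this one, which certified auditors can then run against deployed systems.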
With the lessons learned from this incident, the financial regulator, in collaboration with industry stakeholders and civil society groups, engages in an adaptive public-private partnership to update AI risk evaluation standards. These new standards require financial institutions deploying agentic AI models to incorporate risk-mitigation controls with specific testing against illegal techniques such as layering. Companies failing to comply will face legal liability under the accountability and liability mechanisms component of the governance model.
This intervention ensures continued AI-driven financial innovation while safeguarding economic stability. By adapting regulatory frameworks dynamically rather than imposing static restrictions, the model demonstrates its ability to govern complex AI systems effectively without stifling technological progress.
Extending U.S. AI Leadership
The United States has long been a leader in technological growth. Its ability to foster innovation while enacting strategic policies has fueled progress from the Industrial Revolution to the digital age. AI governance presents the next great challenge—one that requires forward-thinking policies. The U.S. can shape the global AI agenda, reinforcing its position as a leader in innovation and governance. Our proposal encourages market-based solutions without necessitating additional regulatory bureaucracy.
The new administration has prioritized maintaining U.S. leadership in AI innovation, ensuring it remains unburdened by excessive regulation, as demonstrated by Vice President Vance’s comments at the Paris AI Action Summit. Rep. Jay Obernolte (R-Calif.), chairman of the House Bipartisan Task Force on Artificial Intelligence and recently appointed chairman of the Research and Technology Subcommittee in the House Committee on Science, Space, and Technology, has advocated for sectoral regulation and incrementalism in U.S. AI lawmaking. A dynamic governance model adopts this approach with an extra-regulatory structure that relies on a close yet transparent relationship with the private sector. It represents a “third way” for AI policy, rejecting the precautionary and restrictive aspects of the European AI Act and the risk-controlling and surveillance-heavy path taken by China as it vies to set global standards. A renewed American emphasis on standards could help the U.S. assert global AI leadership.
The challenge is clear: AI is too powerful to be left unregulated yet too transformative to be burdened with excessive restrictions. The U.S. has the opportunity—and the responsibility—to lead the world in establishing a governance model that sustains both progress and accountability. A dynamic governance model offers a structured, collaborative pathway to achieving this balance, ensuring that innovation thrives within a framework of transparency and responsibility.