
US AI policy: what's actually being decided, and who benefits

Most coverage frames this as a debate about innovation versus safety. That framing is wrong — and it's wrong in a way that's convenient for the companies it protects.


Last updated April 2026. The White House released its National Policy Framework for AI on March 20, 2026. This piece will be updated when Congress acts — or when the courts do.

In the span of four months, the shape of American AI regulation has shifted more dramatically than in the preceding four years. A presidential executive order. A White House policy framework. A DOJ litigation task force. And thirty-eight states that spent 2025 passing enforceable AI laws, now watching federal lawyers prepare to argue those laws are unconstitutional.

The coverage has been extensive. Most of it has been accurate as far as it goes. What it has largely missed is the structural argument underneath the policy debate — who benefits from this arrangement, why the current framing serves particular interests, and what it tells us about what happens next regardless of who is in power.

The short version: this is not primarily a story about AI safety or innovation. It is a story about jurisdiction. And the outcome of the jurisdictional fight will determine more about how AI is governed in the United States over the next decade than any technical standard, any transparency requirement, or any liability rule.

What just happened

The sequence matters, so it is worth stating plainly.

December 11, 2025
Trump signs Executive Order 14365 — the “One Rule” order
Instructs the DOJ to establish an AI Litigation Task Force to challenge state AI laws on constitutional grounds. Directs the Commerce Secretary to identify “burdensome” state regulations by March 2026. Directs the FTC to issue a policy statement classifying certain state-mandated bias mitigation as a per se deceptive trade practice. Uses $42 billion in previously allocated broadband funding as leverage to discourage states from passing AI regulations.
January 1, 2026
California and Texas state AI laws take effect — under immediate federal pressure
California’s Transparency in Frontier AI Act and Texas’s Responsible AI Governance Act both become effective. Thirty-eight states passed some form of AI legislation in 2025. The executive order casts legal doubt on all of them.
March 20, 2026
White House releases National Policy Framework for AI
Seven thematic policy areas. Recommends Congress preempt “unduly burdensome” state AI laws. Light-touch, innovation-first approach. Not binding law — a set of legislative recommendations. Notably declines to answer whether training AI on copyrighted content constitutes fair use, deferring the question to the courts.
March 30, 2026
Newsom issues California counter-executive order
Strengthens state AI procurement standards, directs separation of California’s procurement process from the federal government’s, and launches a public engagement effort on AI’s workforce impact. A direct counter to federal rollback.

Congress, for its part, has not acted. The administration pushed for a temporary federal moratorium on state AI laws last summer; Congress declined to pass one. Most observers believe comprehensive AI legislation is unlikely before the midterm elections in November 2026. Which means the executive order framework, with all its constitutional shakiness, is the de facto regulatory environment for at least another twelve to eighteen months.

The real fight isn’t about safety — it’s about jurisdiction

The administration’s stated rationale for federal preemption of state AI laws is that a “patchwork of 50 different regulatory regimes” creates compliance costs that undermine American competitiveness in the global AI race with China. This argument is not frivolous. Navigating fifty different state frameworks is genuinely expensive, particularly for smaller companies operating across state lines. The EU AI Act comparison is real: Europe has a unified, comprehensive framework; the United States has a fragmented mess.

But the conclusion drawn from this diagnosis — that the solution is federal preemption of state laws, rather than coherent federal legislation — is not the only available response. It is the response that best serves a specific set of interests.

The “innovation versus safety” frame is strategically constructed. The real question is not whether AI should be regulated, but who gets to do the regulating — and what they are willing to regulate.

Here is what has actually happened: in the absence of federal legislation over the past four years, states became the primary site of enforceable AI accountability. California passed algorithmic accountability requirements. Colorado enacted rules for high-risk AI systems. Illinois required disclosure for AI used in hiring decisions. These laws were imperfect, inconsistent, and sometimes technically confused. But they were law — with enforcement mechanisms, penalties, and legal standing for affected parties.

Federal preemption, as proposed in the executive order and the Framework, would void most of this. Not by replacing it with stronger federal protections: the Framework explicitly proposes no independent AI regulator, no federal liability standard, and no right of action for AI-caused harm. It would replace enforceable state accountability with a set of legislative recommendations that may or may not become law, enforced by sector-specific agencies that may or may not prioritise AI oversight.

What federal preemption actually means in practice

The mechanisms matter here, because they have received far less coverage than the rhetoric.

The executive order does not, by itself, preempt state AI laws — it lacks that authority. Congress holds the preemption power under the Constitution, and Congress has not acted. What the order does is establish structures designed to make state AI regulation expensive, risky, and legally uncertain.

The DOJ’s AI Litigation Task Force, active since January 10, 2026, is tasked with challenging state AI laws on constitutional grounds — primarily the Dormant Commerce Clause, which prohibits states from placing undue burdens on interstate commerce. The argument is that because frontier AI models are developed and deployed by companies operating nationally, state regulations create insurmountable compliance barriers. Courts have significant discretion in applying this doctrine, and its outcome is genuinely uncertain. But the cost of defending a state law against federal constitutional challenge is itself a deterrent — one that will discourage state legislatures from pursuing new AI regulations regardless of what the courts ultimately decide.

The $42 billion in broadband infrastructure funding — previously allocated under the BEAD program — is the other lever. The executive order instructs the Department of Commerce to condition this funding on states refraining from enacting AI regulations deemed inconsistent with federal policy. This is a significant escalation that received limited coverage in most reporting. Using infrastructure funding as leverage to suppress state legislation is an aggressive use of federal spending power, and one that will almost certainly face its own legal challenges.

Why “a patchwork of 50 regulations” is a stronger argument than it sounds

To reason clearly about this, it is worth engaging genuinely with the administration's best case.

A company building an AI hiring tool that operates across all fifty states genuinely faces a compliance problem of significant complexity. California requires algorithmic accountability and disclosure. Colorado requires impact assessments for high-risk AI. Texas requires transparency in automated decision-making. These requirements are not identical. Their definitions conflict. Their enforcement timelines differ. Legal counsel for a small AI startup trying to navigate all of this is not cheap.
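To make the compounding burden concrete, here is a minimal sketch in Python. The state abbreviations are real; the requirement labels are hypothetical simplifications of the statutes described above, chosen only to illustrate how obligations accumulate as coverage grows, not to restate any statutory text.

# Hypothetical sketch: how multi-state obligations compound for one AI tool.
# Requirement labels are illustrative placeholders, not statutory language.
STATE_REQUIREMENTS = {
    "CA": {"algorithmic_accountability_report", "pre_use_disclosure"},
    "CO": {"high_risk_impact_assessment", "consumer_notice"},
    "IL": {"hiring_ai_disclosure", "candidate_consent"},
    "TX": {"automated_decision_transparency"},
}

def obligations_for(states_served: set[str]) -> set[str]:
    # Union of every obligation triggered by the states a tool operates in.
    obligations: set[str] = set()
    for state in states_served:
        obligations |= STATE_REQUIREMENTS.get(state, set())
    return obligations

# A hiring tool deployed in just four states already carries seven
# distinct obligations, each with its own definitions and timelines.
print(sorted(obligations_for({"CA", "CO", "IL", "TX"})))

The point of the sketch is the shape of the problem: every new state adds definitional conflicts, and the union of obligations only ever grows.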

The argument that this complexity chills AI development — particularly for smaller companies without large legal teams — is not without substance. The EU AI Act, for all its complexity, at least provides a single standard. The argument for federal uniformity is a real policy argument, not merely a cover for industry capture.

The problem is that “we need uniformity” does not imply “we need the federal government to preempt state protections without replacing them.” It implies “we need Congress to pass comprehensive federal AI legislation that establishes consistent national standards.” The administration has consistently chosen the first option over the second. That choice produces better outcomes only for the companies whose accountability exposure it reduces, but it has one decisive advantage: it can be pursued by executive action, without the more difficult work of congressional legislation.

What the Framework actually says — and what it deliberately doesn’t

The White House National Policy Framework runs to seven thematic areas. Reading the document itself — rather than summaries of it — is instructive, both for what it says and for the language it uses to say it.

Child safety
Age-assurance requirements, parental controls, data restrictions for minors. The most bipartisan section — and the one area where state authority is explicitly preserved.
Communities
Streamlined data centre permitting, enforcement against AI-enabled fraud, preventing electricity cost increases tied to AI infrastructure. Largely infrastructure-focused.
Competitiveness
Regulatory sandboxes, expanded access to federal datasets, reliance on sector-specific regulators rather than a new AI oversight body. No independent AI regulator proposed.
Creators & IP
Supports voluntary licensing mechanisms, digital replica protections, and the NO FAKES Act. Explicitly defers the copyright training question to courts rather than recommending Congress legislate.
Consumers
Fraud enforcement, transparency requirements for AI-enabled government communications. No proposed right of action for private parties harmed by AI systems.
Workers & education
AI integration into education and workforce training. Research on AI’s labour market impact. No proposed protections for workers displaced or harmed by AI deployment.
Not in the Framework
An independent AI regulator. Federal liability standards for AI-caused harm. A private right of action. Accountability mechanisms for model developers. A definition of “high-risk AI.” A timeline for mandatory compliance with any standard.
Deliberately deferred
Whether training AI on copyrighted content constitutes fair use — handed to courts. Whether algorithmic discrimination constitutes a civil rights violation — left to existing enforcement mechanisms.

The copyright training question is worth dwelling on. The Framework explicitly declines to recommend that Congress legislate a definitive answer on whether using copyrighted material to train AI models constitutes fair use. This is framed as appropriate judicial deference. It is also a decision that benefits AI companies, who would prefer the ambiguity of ongoing litigation to a settled legal standard requiring licensing or compensation. Choosing not to legislate is itself a policy choice — one that happens to be worth billions of dollars to the companies most affected by the outcome.

Who wins, who loses, and who isn’t in the room

Policy analysis often avoids distributional questions because they are uncomfortable. This one requires confronting them directly.

Who benefits from this framework
Frontier AI labs: no federal liability standard, the copyright question deferred to the courts, state accountability laws voided or chilled.
Large tech companies with national deployments: a uniform light-touch standard is easier to navigate than fifty state regimes.
AI infrastructure companies: streamlined data centre permitting and favourable electricity policy.
Companies in copyright litigation over training data: ambiguity preserved, no legislative resolution.

Who bears the cost
Consumers in states with strong AI accountability laws: California, Colorado, and Illinois protections weakened or voided.
Workers: no labour protections in the Framework; research on displacement is not the same as policy addressing it.
Creators whose work trains AI: the copyright question deferred while training continues.
States with democratically enacted AI legislation: facing federal constitutional challenges funded by the DOJ.

California’s counter-executive order is important and should not be dismissed. Newsom’s March 30 order — strengthening AI procurement standards, building a separate state compliance pathway, launching a public engagement effort on workforce impact — demonstrates that California intends to remain a regulatory actor regardless of what the federal government does. For a state with the world’s fourth-largest economy, this matters.

But California acting alone is structurally insufficient. The Dormant Commerce Clause challenge is genuinely strong when applied to state laws that effectively regulate conduct occurring entirely outside the state. California can govern what AI companies do when serving California residents, but its reach over the training and development of frontier models — which happen at data centres in other states and affect users nationally — is legally limited. The federal-state standoff requires federal resolution. California can slow federal preemption but not prevent it.

The comparison to financial regulation before 2008 should be made carefully, because it is easy to overstate. But the structural parallel is real: an administration committed to light-touch federal oversight, agencies deferring to industry-led standards rather than prescriptive rules, enforcement reliant on existing legal mechanisms rather than new authority, and a declared intent to preempt stricter state-level regulation. The outcome of that arrangement, in the financial context, was not industry self-correction.

What this means if you’re deploying AI in your organisation right now

The practical implication of all of this is a hybrid compliance environment that will persist through at least 2027 and probably longer.

State AI laws remain in effect unless and until Congress passes preemptive federal legislation or courts strike them down. The DOJ’s constitutional challenges will take years to resolve. Organisations operating across state lines face real uncertainty about which standards apply, when, and to what. The organisations best placed to navigate this are not those that waited for federal clarity — they are those that built AI governance frameworks in 2024 and 2025 that are flexible enough to adapt as the law evolves.

The Framework’s explicit encouragement of “flexible compliance programs” is, unusually for a White House document, genuine advice. Building governance that can accommodate diverging state and federal requirements — rather than betting on a single outcome — is the only defensible approach while the jurisdictional fight plays out.

The governance questions that matter most right now are not primarily legal ones. They are organisational: who in your organisation owns AI accountability, how are deployment decisions reviewed, what happens when a model produces a harmful output, and how would you demonstrate responsible use to a regulator operating under either the current state framework or a future federal one? Those questions have the same answer regardless of which legal standard ultimately applies.
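As a concrete illustration, here is a hypothetical sketch of what keeping those answers might look like as a record rather than a slide. Every field name is invented for illustration; nothing in it is mandated by any current state or federal rule.

# Hypothetical deployment-review record; field names are illustrative only.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DeploymentReview:
    system_name: str
    accountable_owner: str        # who owns AI accountability for this system
    review_date: date
    jurisdictions: list[str]      # where the system is deployed
    harms_considered: list[str]   # harmful outputs assessed before deployment
    incident_procedure: str       # what happens when a model produces a harmful output
    evidence: list[str] = field(default_factory=list)  # artefacts you could show a regulator

review = DeploymentReview(
    system_name="resume-screening-v2",
    accountable_owner="Head of People Operations",
    review_date=date(2026, 4, 1),
    jurisdictions=["CA", "IL"],
    harms_considered=["disparate impact on protected groups", "fabricated qualifications"],
    incident_procedure="pause deployment, notify owner, review and log within 48 hours",
    evidence=["Q1 2026 bias audit", "disclosure notice shown to candidates"],
)

The particular schema does not matter. What matters is that a record like this answers the same questions whichever legal standard ultimately applies, which is what makes it defensible under both.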

Navigating AI governance and compliance in this environment? Book a conversation with quick·ai →

There is a final observation worth making about how the current policy moment will be understood in retrospect. The decisions being made now — about liability, about jurisdiction, about which accountability mechanisms exist and which do not — will shape the deployment of AI systems at a scale that makes the current moment look modest. The companies building frontier models understand this. The administration’s framework reflects that understanding. The question is whether the public and its representatives will engage with the structural choices being made on their behalf before those choices become irreversible.

The framing as a debate about innovation and safety will persist because it is useful. But the real debate is about power: who holds it, who is accountable for how it is used, and what recourse exists for those who bear the costs of its misuse. That is not a technology question. It is a political one. And it deserves to be treated as such.

Work with quick·ai
If you’re thinking seriously about AI governance, strategy, or implementation for your organisation, I work with a small number of clients on exactly these questions. It starts with a conversation.
Book a free 30-minute call →