Preemption: The White House AI Framework’s Real Design
by J.R. Howell
Beneath the administration’s language of innovation and national leadership, the March 2026 AI framework is best read as a proposal to displace state law, narrow accountability, and reorganize the market on terms that favor scale. A link to the White House’s policy paper appears in endnote 1.
In the White House’s four-page AI legislative framework, the most consequential ideas arrive last. Section VII, on page 4, asks Congress to preempt state AI laws that impose “undue burdens,” to bar states from regulating AI development, and to bar them from penalizing developers for a third party’s unlawful use of their models.[1] That is the document’s center of gravity. The framework says many other things about children, creators, speech, infrastructure, and workforce training. But its operative project is not the creation of a dense new federal AI code; it is the displacement of state law. [1]
That matters because states moved first. In the absence of a comprehensive federal statute through 2024 and 2025, states began building materially different AI rules of their own. Colorado targeted algorithmic discrimination in high-risk systems. California imposed frontier-model transparency, incident-reporting, and whistleblower obligations. Texas enacted a broader AI governance law with consumer protections and its own sandbox mechanism. New York adopted the RAISE Act for large frontier developers.[2][3][4][5]
The administration and its allies frame this as harmonization. The White House says a patchwork of state laws threatens national competitiveness, and the U.S. Chamber of Commerce promptly praised the promise of a single national framework.[1][6] That argument is not frivolous. Multistate businesses do bear real costs when state obligations diverge. But uniformity is not self-justifying. It depends on what the national rule contains and whom it protects. Here, the framework would preempt more than it replaces. [1][6]
Preemption as design
The March framework did not appear in isolation. In December 2025, Executive Order 14,365 directed the Attorney General to establish an AI Litigation Task Force whose “sole responsibility” is to challenge state AI laws the administration views as inconsistent with federal policy. The order also directed the Commerce Department to identify onerous state laws, contemplated a federal reporting and disclosure standard, and instructed Commerce to condition at least some remaining BEAD funding on state compliance to the maximum extent permitted by law.[7] This is an attempt to use litigation, administrative leverage, and the federal purse to reset the balance of AI governance before Congress legislates. [7]
The framework tries to soften that move by preserving some state authority. It says a federal standard should not preempt generally applicable state laws, including laws protecting children, preventing fraud, and protecting consumers. It also preserves state zoning authority and rules governing a state’s own procurement and use of AI.[1] Those carveouts are meaningful. They mean the proposal is not a blanket moratorium. But they are narrower than they first appear. If states cannot regulate AI development itself, and cannot assign liability to developers for downstream unlawful uses of their models, much of the most consequential governance terrain becomes federal by design. [1][7]
That is why preemption is the real story. State AI laws are not merely compliance irritants. They have been the primary venue in which the country has tried to translate diffuse concerns about bias, frontier-model risk, consumer notice, incident reporting, and public accountability into actual legal obligations.[2][3][5] A broad federal override would not just simplify compliance maps. It would change who gets to draw the map, and it would narrow one of the only functioning routes through which consumers, workers, civil-rights groups, and state regulators can shape AI governance. [1][2][3][5]
A federal framework that reshapes the market
The framework’s other design choices point in the same direction. Section V says Congress should not create a new federal AI rulemaking body. Instead, it favors sector-specific regulators, regulatory sandboxes, industry-led standards, and federal datasets prepared in AI-ready formats for industry and academia.[1] Any one of those tools can be defended on its own. Taken together, especially alongside broad preemption, they amount to a relatively thin federal regime with a thicker structural advantage for firms that already possess scale. [1]
Industry-led standards are not necessarily neutral simply because they are technical. In a market defined by extraordinary concentration, firms with the largest compliance teams, cloud footprints, compute budgets, and Washington access are better positioned to shape the benchmarks that later become “best practices.” Scholarship on foundation-model competition has warned that legal institutions can either preserve rivalry or help freeze the field around a handful of dominant actors.[8] The framework’s rejection of a dedicated federal AI regulator does not eliminate governance. It relocates much of it into institutions and processes that powerful incumbents are often best equipped to influence. [1][8]
Regulatory sandboxes raise a similar concern. In principle, they allow firms to test new systems under regulatory supervision. In practice, they can become a form of selective relief. The OECD defines AI sandboxes as controlled environments in which firms may receive waivers from existing legal provisions or compliance processes while regulators observe innovation.[9] In a concentrated AI market, that architecture can favor firms already adept at securing tailored treatment and managing iterative regulator engagement. The March framework praises sandboxes, but it offers little detail about how to prevent them from becoming instruments of privilege rather than learning. [1][9]
The same distributional logic applies to federal datasets. Public data can be a public good. But frontier-model training is not a garage enterprise. The firms most able to turn large public datasets into commercial advantage are the ones with massive compute stacks, cloud infrastructure, and capital reserves. Read alongside the framework’s proposed liability rule, the dataset provisions send a clearer market signal. Existing U.S. tort law already contains tools that may apply to AI harms, albeit imperfectly and with uncertainty.[10] A federal rule that bars states from penalizing developers for third-party unlawful conduct would not eliminate AI liability. It would redistribute it, and a meaningful share of the resulting exposure would likely move downstream to deployers, enterprise customers, and the public. [1][10]
The governance questions the framework declines to answer
The framework is thinnest where public-interest governance is hardest. It includes a child-safety section, but the obligations are mostly high level: parental controls, commercially reasonable age assurance, features to reduce sexual exploitation and self-harm, and a reminder that existing child privacy protections apply to AI systems.[1] In the same section, the framework warns Congress away from ambiguous content rules and open-ended liability. That is a reactive posture. It says less about safety-by-design duties, independent auditing, remedies, platform architecture, or external oversight than about avoiding litigation. The administration points to the TAKE IT DOWN Act, and that law addresses a serious harm. But it is not a general child-safety architecture for AI systems. [1][11]
Labor is similarly narrow. The framework speaks of AI training, apprenticeships, land-grant support, and further study of task-level workforce realignment.[1] That is better than silence, but it is still principally a workforce-adaptation agenda, not a worker-protection agenda. Brookings has recently emphasized that research on AI and labor remains unsettled and heterogeneous, while labor groups such as the AFL-CIO have pressed for stronger worker voice, collective bargaining, and enforceable protections in workplace AI governance.[12] Precisely because the labor picture is uncertain, the absence of stronger transition tools, bargaining protections, or workplace-governance rules is consequential rather than incidental. [1][12]
The omission on discrimination is sharper still. The framework does not establish a meaningful federal architecture for AI discrimination in employment, housing, credit, education, health care, or public benefits.[1] That silence is significant because state efforts have focused directly on algorithmic discrimination, and civil-rights groups have argued that disparate-impact doctrine remains one of the most important tools for challenging biased automated systems.[2][13] The conflict is not abstract. The December executive order singled out Colorado’s law on algorithmic discrimination as a paradigmatic example of burdensome state regulation.[7] A federal preemption regime without a federal civil-rights substitute does not solve that problem. It mostly relocates it. [1][2][7][13]
Competition and pricing are also largely absent. The framework says almost nothing about exclusivity agreements, cloud bottlenecks, interoperability, or the possibility that AI tools can facilitate price coordination or more opaque forms of individualized pricing.[1] That silence is notable because the FTC has been studying surveillance pricing and has publicly emphasized that firms may not use algorithms to evade antitrust law.[14] A federal AI framework that displaces state rules while saying little about market concentration is not simply light-touch. It is selective. [1][8][14]
Then there is privacy. Beyond the child-safety section, the framework offers no comprehensive federal data privacy regime for training data, retention, consent, deletion, transparency, auditing, or incident reporting.[1] That absence is especially striking because state AI and privacy laws are precisely where many of these questions are now being worked out. Texas’s own AI governance statute, for example, expressly ties AI governance to consumer data protection and breach-response obligations.[4] The March framework asks Congress to nationalize the field without supplying a national privacy baseline. [1][4]
What this means for counsel
For corporate legal departments, the immediate lesson is not that regulation disappears but that risk may reallocate. Developers may see in the framework an opportunity to press for broader preemption and narrower upstream liability. Deployers should see the opposite. If federal policy shifts more responsibility downstream, then vendor contracts, procurement rules, testing protocols, and internal governance become more important, not less. Counsel will want harder representations about training practices, documentation, bias testing, incident notice, model updates, audit rights, indemnities, and termination rights when model behavior changes materially.[1][10]
Employers and other consequential deployers should be especially cautious about reading federal preemption as a safe harbor. Even under the framework’s preferred approach, generally applicable consumer, fraud, child-safety, and employment laws would remain, and a meaningful patchwork could persist around procurement, sectoral regulation, and state use of AI.[1] At the same time, the executive order’s aggressive posture toward state AI laws points to years of litigation and uncertainty before any stable federal settlement emerges.[7] A lighter-touch framework on paper may therefore produce a more complicated litigation environment in practice. [1][7]
There is still a serious counterargument. A single national rule could lower transaction costs, reduce duplicative compliance work, and help smaller firms avoid navigating a thicket of conflicting state requirements. Business groups and deregulatory advocates have made exactly that case.[6] But the answer depends on the content of the national rule. Uniformity without meaningful federal accountability can just as easily harden incumbent advantage. It can lower the compliance floor while leaving compute concentration, cloud dependence, data access, and standard-setting power where they already are. [6][8]
That is the quiet asymmetry of the March framework. It is skeptical of state experimentation, skeptical of new federal institutions, skeptical of open-ended liability, and comfortable with industry-led standards.[1] Those preferences are coherent. But they are not neutral. They amount to a legal and institutional choice about who should bear the costs of AI failure and who should benefit most from AI scale. [1]
If Washington wants to displace state law in AI, it needs to do more than announce uniformity. A credible federal replacement would pair national reach with national duties: clearer responsibility across the AI supply chain, operational child-safety requirements, meaningful civil-rights protections, baseline privacy rights, transparent reporting and auditing, attention to concentration and price coordination, and a more serious response to worker transition.[1][12][13][14]
The March 2026 framework may yet become the opening bid in a fuller congressional debate. But as written, it is best understood not as a comprehensive public-interest settlement for AI governance, and not simply as a competitiveness manifesto in the administration’s preferred vocabulary. It is a preemption framework. And the legal question it raises is not only whether Washington should set one rule. It is whether Washington is offering enough rule, and enough accountability, to justify clearing the states from the field. [1][7]
Endnotes
[1] The White House, National Policy Framework for Artificial Intelligence: Legislative Recommendations 2-4 (Mar. 2026), available at https://www.whitehouse.gov/wp-content/uploads/2026/03/03.20.26-National-Policy-Framework-for-Artificial-Intelligence-Legislative-Recommendations.pdf (last visited Mar. 20, 2026) (setting out the framework’s child-safety, innovation, workforce, and preemption provisions, including the call to preempt state AI laws that impose undue burdens and the limitation on state penalties for third-party unlawful conduct).
[2] Consumer Protections for Artificial Intelligence, Colo. Gen. Assemb., S.B. 24-205 (2024), available at https://leg.colorado.gov/bills/sb24-205 (last visited Mar. 20, 2026) (requiring reasonable care to protect consumers from algorithmic discrimination in high-risk AI systems and imposing disclosure, impact-assessment, and appeal obligations).
[3] Artificial Intelligence Models: Large Developers, S.B. 53, 2025-2026 Reg. Sess. (Cal. 2025), available at https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202520260SB53 (last visited Mar. 20, 2026); Governor Newsom Signs SB 53, Advancing California’s World-Leading Artificial Intelligence Industry, Off. of Governor Gavin Newsom (Sept. 29, 2025), available at https://www.gov.ca.gov/2025/09/29/governor-newsom-signs-sb-53-advancing-californias-world-leading-artificial-intelligence-industry/ (last visited Mar. 20, 2026) (describing California’s frontier-model transparency, safety incident reporting, whistleblower, and enforcement provisions).
[4] Responsible Artificial Intelligence Governance Act, Tex. H.B. 149, Bill Analysis, 89th Leg., Reg. Sess. (2025), available at https://capitol.texas.gov/tlodocs/89R/analysis/html/HB00149S.htm (last visited Mar. 20, 2026) (describing consumer protections, enforcement mechanisms, a regulatory sandbox program, and AI-related links to consumer data protection obligations).
[5] Governor Kathy Hochul Signs Nation-Leading Legislation to Require AI Frameworks for AI Frontier Models, Off. of N.Y. Governor Kathy Hochul (Dec. 19, 2025), available at https://www.governor.ny.gov/news/governor-hochul-signs-nation-leading-legislation-require-ai-frameworks-ai-frontier-models (last visited Mar. 20, 2026) (describing the RAISE Act’s safety protocol, incident-reporting, and oversight-office requirements for large frontier developers).
[6] U.S. Chamber of Com., U.S. Chamber Lauds Unified National Framework Governing AI (Mar. 20, 2026), available at https://www.uschamber.com/technology/u-s-chamber-lauds-unified-national-framework-governing-ai (last visited Mar. 20, 2026) (applauding a single national AI framework and regulatory clarity for businesses of all sizes); Adam Thierer, White House AI Legislative Vision Stresses Need for a Pro-Innovation National Framework, R St. Inst. (Mar. 20, 2026), available at https://www.rstreet.org/commentary/white-house-ai-legislative-vision-stresses-need-for-a-pro-innovation-national-framework/ (last visited Mar. 20, 2026) (arguing that a light-touch federal framework is necessary to avoid a state patchwork).
[7] Exec. Order No. 14,365, 90 Fed. Reg. 58,499 (Dec. 16, 2025), available at https://www.federalregister.gov/documents/2025/12/16/2025-23092/ensuring-a-national-policy-framework-for-artificial-intelligence (last visited Mar. 20, 2026) (creating the AI Litigation Task Force, directing federal evaluation of state AI laws, and tying certain federal funding consequences to states with onerous AI laws).
[8] Thibault Schrepel & Alex “Sandy” Pentland, Competition between AI Foundation Models: Dynamics and Policy Recommendations, 34 Indus. & Corp. Change 1085 (2025), available at https://academic.oup.com/icc/article/34/5/1085/7942098 (last visited Mar. 20, 2026) (arguing that competition in foundation models can harden around a small number of actors absent legal and institutional choices that preserve openness and rivalry).
[9] Org. for Econ. Co-operation & Dev., Regulatory Sandboxes in Artificial Intelligence (July 13, 2023), available at https://www.oecd.org/en/publications/regulatory-sandboxes-in-artificial-intelligence_8f80a0e6-en.html (last visited Mar. 20, 2026) (defining AI sandboxes as controlled environments that may include waivers from existing legal provisions or compliance processes).
[10] Gregory Smith et al., Liability for Harms from AI Systems: The Application of U.S. Tort Law and Liability to Harms from Artificial Intelligence Systems, RAND Corp. (2024), available at https://www.rand.org/pubs/research_reports/RRA3243-4.html (last visited Mar. 20, 2026) (describing how existing tort doctrines may apply to AI harms and emphasizing uncertainty across the AI supply chain).
[11] President Donald J. Trump Signed S. 146 into Law, The White House (May 19, 2025), available at https://www.whitehouse.gov/presidential-actions/2025/05/president-donald-j-trump-signed-s-146-into-law/ (last visited Mar. 20, 2026) (describing the TAKE IT DOWN Act’s prohibition on intentional disclosure of nonconsensual intimate depictions and platform removal duties); First Lady Melania Trump Joins President Trump for Signing of the “TAKE IT DOWN” Act, The White House (May 19, 2025), available at https://www.whitehouse.gov/briefings-statements/2025/05/first-lady-melania-trump-joins-president-trump-for-signing-of-the-take-it-down-act/ (last visited Mar. 20, 2026) (describing the law as focused on nonconsensual intimate images and deepfake abuse).
[12] Mark Muro et al., Research on AI and the Labor Market Is Still in the First Inning, Brookings Inst. (Mar. 12, 2026), available at https://www.brookings.edu/articles/research-on-ai-and-the-labor-market-is-still-in-the-first-inning/ (last visited Mar. 20, 2026) (emphasizing uncertainty and heterogeneity in AI’s labor effects); AFL-CIO, Artificial Intelligence: Principles to Protect Workers (Oct. 15, 2025), available at https://aflcio.org/sites/default/files/2025-10/Final%20for%20Website%20-%20ARTIFICIAL%20INTELLIGENCE%20Principles%20to%20Protect%20Workers_10.15.25_0.pdf (last visited Mar. 20, 2026) (calling for worker input, collective bargaining, and meaningful enforcement in workplace AI governance).
[13] Chiraag Bains, The Critical Role of Disparate Impact in AI Accountability, Leadership Conf. on Civ. & Hum. Rts. (Jan. 2026), available at https://civilrights.org/wp-content/uploads/2026/01/SNAPSHOT-When-Machines-Discriminate_The-Critical-Role-of-Disparate-Impact-in-AI-Accountability.pdf (last visited Mar. 20, 2026) (arguing that disparate-impact doctrine remains central to identifying and remedying algorithmic discrimination).
[14] FTC Surveillance Pricing Study Indicates Wide Range of Personal Data Used to Set Individualized Consumer Prices, Fed. Trade Comm’n (Jan. 17, 2025), available at https://www.ftc.gov/news-events/news/press-releases/2025/01/ftc-surveillance-pricing-study-indicates-wide-range-personal-data-used-set-individualized-consumer (last visited Mar. 20, 2026) (describing FTC findings on the use of personal data to inform individualized pricing); FTC & DOJ File Statement of Interest in Hotel Room Algorithmic Price-Fixing Case, Fed. Trade Comm’n (Mar. 28, 2024), available at https://www.ftc.gov/news-events/news/press-releases/2024/03/ftc-doj-file-statement-interest-hotel-room-algorithmic-price-fixing-case (last visited Mar. 20, 2026) (stating that firms may not use algorithms to evade antitrust law).