On December 11, President Trump signed an Executive Order on “Ensuring a National Policy Framework for Artificial Intelligence” (“AI Preemption EO”), the culmination of months of efforts by Republican lawmakers to assert federal primacy over AI regulation.  The AI Preemption EO, which follows the release of a draft version in November, states that “[t]o win” the race “for supremacy” in AI, U.S. AI companies must be “free to innovate without cumbersome regulation” and that “excessive State regulation thwarts this imperative,” including state laws that “requir[e] entities to embed ideological bias within models” and “impermissibly regulate beyond [s]tate borders.”  To address these concerns, the AI Preemption EO states that the Trump Administration “must act with the Congress to ensure that there is a minimally burdensome national standard,” which must “ensure that children are protected, censorship is prevented, copyrights are respected, and communities are safeguarded.”  However, the AI Preemption EO states that, “[u]ntil such a national standard exists,” the Administration has an “imperative” to “check the most onerous and excessive [state AI] laws.”  On December 8, prior to issuing the AI Preemption EO, President Trump stated that there “must be only One Rulebook if we are going to continue to lead on AI,” and that the involvement of states in AI regulation will “destroy[]” U.S. AI innovation “in its infancy.” 

To implement its policy of “sustain[ing] and enhanc[ing] the United States’ global AI dominance through a minimally burdensome national policy framework for AI,” the AI Preemption EO directs White House officials and federal agencies to take various steps either to preempt state AI laws directly where possible or to challenge such laws as preempted by existing federal laws or regulations:

AI Litigation Task Force.  The AI Preemption EO directs the Attorney General to establish an “AI Litigation Task Force” with the “sole responsibility” of challenging state AI laws that unconstitutionally regulate interstate commerce, are preempted by federal regulations, or are otherwise unlawful “in the Attorney General’s judgment,” including laws identified as “onerous” state AI laws in evaluations published by the Commerce Secretary. 

On December 8, the White House Special Advisor for AI and Crypto, David Sacks, indicated that a potentially wide range of state AI laws may be subject to challenge under the AI Litigation Task Force’s mandate, stating that, when “an AI model is developed in state A, trained in state B, inferenced in state C, and delivered over the internet through national telecommunications infrastructure,” it is “clearly interstate commerce . . . reserve[d] for the federal government to regulate.”  On the other hand, according to Sacks, AI preemption would not apply to “generally applicable” state laws—such as those prohibiting or penalizing child sexual abuse material (CSAM)—or to local decisions regarding the construction of AI data centers. 

The decision to issue an EO preempting state laws in piecemeal fashion, and to pursue litigation and regulation to challenge potentially conflicting state laws, suggests that Congress is unlikely to act soon to preempt state AI regulations and replace them with uniform national standards.  While the AI Preemption EO may ultimately overturn some state AI laws or chill states from adopting new ones, the AI Preemption EO’s reliance on disparate federal agencies and authorities—most of which are not AI-specific—may increase uncertainty as to what rules govern the development, deployment, and use of AI in the short term.  In the meantime, states will generally be free to continue enacting and enforcing new AI laws.

Evaluation of State AI Laws.  The AI Preemption EO directs the Commerce Secretary, in consultation with White House officials, to publish an evaluation of state AI laws that identifies “onerous laws” that conflict with the AI Preemption EO’s policy and state AI laws that should be referred to the AI Litigation Task Force.  Consistent with President Trump’s July 23 Executive Order on “Preventing Woke AI in the Federal Government,” the AI Preemption EO requires this evaluation to identify, “at minimum,” state AI laws that “require AI models to alter truthful outputs” or that “may compel” AI developers or deployers to disclose or report information in violation of First Amendment or other constitutional rights.  Additionally, the AI Preemption EO provides that the evaluation may identify state AI laws that “promote AI innovation consistent with the policy” of the AI Preemption EO.

Although it is unclear what other state AI laws may be considered “onerous” under this provision, the White House Office of Science and Technology Policy (OSTP)’s September 26 Request for Information on federal AI regulatory reform outlined five categories of “barriers” that hinder AI development, deployment, and adoption and that may inform the AI Preemption EO’s implementation: (1) regulations that are “based on human-centered assumptions”; (2) regulations that “assume human actors”; (3) regulations that lack sufficient “regulatory clarity” regarding their application to AI; (4) regulations that “directly target” and “are a major hindrance” to AI; and (5) regulations that are inconsistently enforced due to “organizational factors.”

Funding Restrictions for States with AI Laws.  Similar to the approach of the moratorium on state and local AI laws that the Senate overwhelmingly rejected in July, the AI Preemption EO directs the Commerce Secretary to issue a Policy Notice specifying conditions of state eligibility for Broadband Equity, Access, and Deployment (BEAD) funds.  The Policy Notice must specify that states with “onerous AI laws,” as identified by the Commerce Secretary above, are ineligible for such funds, and must describe how a “fragmented State [AI] regulatory landscape” may undermine the purpose and mission of BEAD funding, including the “growth of AI applications reliant on high-speed networks” and BEAD’s “mission of delivering universal, high-speed connectivity.”

Additionally, the AI Preemption EO directs federal agencies to take “immediate steps” to determine whether to condition their discretionary grant programs on states “not enacting an AI law that conflicts with the policy” of the AI Preemption EO, including “onerous state AI laws” identified by the Commerce Secretary or state AI laws challenged by the AI Litigation Task Force.  Agencies must also consider whether, “for those States that have enacted such [AI] laws,” to condition discretionary grants on such states “entering into a binding agreement” with the agency “not to enforce any such [AI] laws” for the performance period of the grant.

FCC Federal AI Reporting and Disclosure Standard.  Consistent with its policy of establishing a “uniform national policy framework for AI,” the AI Preemption EO directs the Chair of the Federal Communications Commission (FCC) to “initiate a proceeding” on adopting a “Federal reporting and disclosure standard for AI models” that “preempts conflicting State laws.”  Although not stated explicitly, this provision may be intended to preempt state laws like California’s Transparency in Frontier AI Act, which establishes AI developer reporting and disclosure obligations that the draft version of the AI Preemption EO described as “complex and burdensome.”  This provision also echoes recommendations in President Trump’s July 23 AI Action Plan, which called on the FCC to evaluate whether state AI regulations may be preempted under the Communications Act. 

FTC Section 5 Preemption Policy Statement.  The AI Preemption EO directs the Chair of the Federal Trade Commission (FTC) to issue a policy statement on the “application of the FTC Act’s prohibition on unfair and deceptive acts or practices” under Section 5 of the FTC Act “to AI models.”  Similar to language in the President’s Woke AI Executive Order, the AI Preemption EO requires the FTC policy statement on AI models to specifically explain where “State laws that require alterations to the truthful outputs of AI models” may be preempted by Section 5’s prohibition on deceptive acts or practices.  This provision appears intended to challenge the Colorado AI Act, a 2024 law that imposes various governance requirements for developers and deployers of “high-risk AI systems” in order to minimize risks of algorithmic discrimination.  The AI Preemption EO’s statement of purpose argues that the Colorado AI Act could “force AI models” to “produce false results in order to avoid a ‘differential treatment or impact’” on the basis of protected characteristics. 

Legislative Recommendations for Federal AI Framework.  The AI Preemption EO directs the White House Special Advisor for AI and Crypto and the Office of Legislative Affairs to jointly prepare a “legislative recommendation establishing a uniform Federal policy framework for AI that preempts state AI laws” that conflict with the AI Preemption EO’s policy of “global AI dominance through a minimally burdensome, uniform national policy framework for AI.”  The AI Preemption EO further provides that this legislative recommendation “shall not” preempt “otherwise lawful” state AI laws that relate to (1) child safety protections, (2) AI compute and data center infrastructure “other than generally applicable permitting reforms,” (3) state government procurement and use of AI, or (4) “other topics as shall be determined.” 

While the substance of the AI Preemption EO’s contemplated “legislative recommendation” is not specified, any future proposed federal AI framework legislation could be informed by a growing number of AI legislative proposals that have emerged in Congress in recent years.  For example, the Strengthening AI Normalization and Diffusion by Oversight and eXperimentation (“SANDBOX”) Act, introduced by Senate Commerce Committee Chair Ted Cruz (R) in September, would allow sandbox program participants to request waivers or modifications to federal regulations to enable the deployment of AI tools.  Additionally, the Safeguarding Adolescents From Exploitative (SAFE) Bots Act (H.R. 6489), introduced by Representatives Erin Houchin (R-IN) and Jake Auchincloss (D-MA) on December 5, would preempt state AI laws that “cover[] a matter described” in that bill’s AI chatbot safety provisions for minors. 

The AI Preemption EO is the most decisive step taken by the White House to date to halt an expanding array of state AI laws.  In recent years, state lawmakers in both parties have enacted dozens of new AI laws, from frontier model public safety regulations and AI consumer protection laws to chatbot safeguards and bans on harmful AI-generated deepfakes and nonconsensual impersonations.  The AI Preemption EO could also face legal challenges from state officials.  On December 8, California Attorney General Rob Bonta (D) stated that his office would “take steps to examine the legality or potential illegality” of the AI Preemption EO, and Florida Governor Ron DeSantis (R), who recently proposed an “AI Bill of Rights” to protect Florida consumers, stated that an “executive order doesn’t/can’t preempt state legislative action.”

The issuance of the AI Preemption EO follows a series of legislative efforts to preempt state AI laws that have stalled in Congress.  In July, the Senate rejected, 99-1, a proposed budget reconciliation bill amendment that would have imposed a sweeping moratorium on the enforcement of state and local AI regulations.  And earlier this month, Republican congressional leaders abandoned an attempt to include an AI preemption provision in the National Defense Authorization Act (NDAA), despite the backing of the White House. 

Matthew Shapanka

Matthew Shapanka practices at the intersection of law, policy, and politics, developing strategies to guide businesses facing complex legislative, regulatory, and investigative matters. Matt draws on more than 15 years of experience across Capitol Hill, private practice, state government, and political campaigns to advise clients on leading-edge policy issues involving artificial intelligence, semiconductors, connected and autonomous vehicles, and other critical and emerging technologies.

Matt works with clients to develop and execute complex public policy initiatives that involve legal, political, and reputational risks. He regularly assists clients to:

Develop public policy strategies
Draft federal and state legislation and regulations
Analyze legislation, regulations, and other government initiatives
Craft testimony, regulatory comments, fact sheets, letters and other advocacy materials
Prepare company executives and other witnesses to testify before Congress, state legislatures, and regulatory bodies
Represent clients before Congress, the White House, federal agencies, state legislatures, and state regulatory agencies
Build and manage policy advocacy coalitions

He advises clients across multiple policy areas, including matters involving regulation of critical and emerging technologies like artificial intelligence, connected and autonomous vehicles, and semiconductors; national security; intellectual property; antitrust; financial services technologies (“fintech”); food and beverage regulation; COVID-19 pandemic response and recovery; and election administration and campaign finance.

Matt rejoined Covington after serving as Chief Counsel for the U.S. Senate Committee on Rules and Administration, where he advised Chairwoman Amy Klobuchar (D-MN) on all legal, policy, and oversight matters before the Committee. Most significantly, Matt staffed the Committee in passing the Electoral Count Reform Act, a landmark bipartisan law that updates the procedures for certifying and counting votes in presidential elections, and in conducting the Committee’s bipartisan joint investigation (with the Homeland Security Committee) into the security planning and response to the January 6, 2021 attack on the Capitol.

Both in Congress and at Covington, Matt has prepared dozens of corporate and nonprofit executives, academics, government officials, and presidential nominees for testimony at congressional committee hearings and depositions. He is a skilled legislative drafter who has composed dozens of bills and amendments introduced in Congress and state legislatures, including several that have been enacted into law across multiple policy areas. Matt also leads the firm’s state policy practice, advising clients on complex multistate legislative and regulatory matters and managing state-level advocacy efforts.

In addition to his policy work, as a member of Covington’s nationally recognized (Chambers Band 1) Election and Political Law Practice Group, Matt advises and represents clients on the full range of political law compliance and enforcement matters, including:

Federal election, campaign finance, lobbying, and government ethics laws
The Securities and Exchange Commission’s “Pay-to-Play” rule
Election and political laws of states and municipalities across the country

Before law school, Matt served in the administration of former Governor Deval Patrick (D-MA), where he worked on policy, communications, and compliance matters for federal economic recovery funding awarded to the state. He has also staffed federal, state, and local political candidates in Massachusetts and New Hampshire.

August Gweon

August Gweon counsels national and multinational companies on new regulatory frameworks governing artificial intelligence, robotics, and other emerging technologies, digital services, and digital infrastructure. August leverages his AI and technology policy experiences to help clients understand AI industry developments, emerging risks, and policy and enforcement trends. He regularly advises clients on AI governance, risk management, and compliance under data privacy, consumer protection, safety, procurement, and platform laws.

August’s practice includes providing comprehensive advice on U.S. state and federal AI policies and legislation, including the Colorado AI Act and state laws regulating automated decision-making technologies, AI-generated content, generative AI systems and chatbots, and foundation models. He also assists clients in assessing risks and compliance under federal and state privacy laws like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in AI public policy advocacy and rulemaking.