House Republicans have passed through committee a nationwide, 10-year moratorium on the enforcement of state and local laws and regulations that impose requirements on AI and automated decision systems. The moratorium, which would not apply to laws that promote AI adoption, highlights the widening gap between a wave of new…
Continue Reading House Republicans Push for 10-Year Moratorium on State AI Laws
August Gweon
August Gweon counsels national and multinational companies on data privacy, cybersecurity, antitrust, and technology policy issues, including issues related to artificial intelligence and other emerging technologies. August leverages his experiences in AI and technology policy to help clients understand complex technology developments, risks, and policy trends.
August regularly provides advice to clients on privacy and competition frameworks and AI regulations, with an increasing focus on U.S. state AI legislative developments and trends related to synthetic content, automated decision-making, and generative AI. He also assists clients in assessing federal and state privacy regulations like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in public policy discussions and rulemaking processes.
April 2025 AI Developments Under the Trump Administration
This is part of an ongoing series of Covington blogs on the AI policies, executive orders, and other actions of the Trump Administration. This blog describes AI actions taken by the Trump Administration in April 2025, and prior articles in this series are available here.
White House OMB Issues AI Use & Procurement Requirements for Federal Agencies
On April 3, the White House Office of Management & Budget (“OMB”) issued two memoranda on the use and procurement of AI by federal agencies: Memorandum M-25-21 on Accelerating Federal Use of AI through Innovation, Governance, and Public Trust (“OMB AI Use Memo”) and Memorandum M-25-22 on Driving Efficient Acquisition of Artificial Intelligence in Government (“OMB AI Procurement Memo”). The two memos partially implement President Trump’s January 23 Executive Order 14179 on “Removing Barriers to American Leadership in Artificial Intelligence,” which, among other things, directs OMB to revise the Biden OMB AI Memos to align with the AI EO’s policy of “sustain[ing] and enhanc[ing] America’s global AI dominance.” The OMB AI Use Memo outlines agency governance and risk management requirements for the use of AI, including AI use case inventories and generative AI policies, and establishes “minimum risk management practices” for “high-impact AI use cases.” The OMB AI Procurement Memo establishes requirements for agency AI procurement, including preferences for AI “developed and produced in the United States” and contract terms to protect government data and prevent vendor lock-in. According to the White House’s fact sheet, the OMB Memos, which rescind and replace AI use and procurement memos issued under President Biden’s Executive Order 14110, shift U.S. AI policy to a “forward-leaning, pro-innovation, and pro-competition mindset” that will make agencies “more agile, cost-effective, and efficient.”
Department of Energy Announces Federal Sites for AI Data Center Construction
On April 3, the Department of Energy (“DOE”) issued a Request for Information (“RFI”) on AI Infrastructure on federal lands owned or managed by DOE. The RFI seeks comment from “entities with experience in the development, operation, and management of AI infrastructure,” along with other stakeholders, on a range of topics, including potential data center designs, technologies, and operational models, potential power needs and timelines for data centers, and related financial or contractual considerations. As part of the RFI, DOE announced 16 potential DOE sites for “rapid [AI] data center construction,” with the goal of initiating data center construction by the end of 2025 and commencing data center operation by the end of 2027 through public-private partnerships. The comment period for the RFI closed on May 7, 2025.
President Trump Issues Executive Order on Coal-Powered AI Infrastructure
On April 8, President Trump issued Executive Order 14261, titled “Reinvigorating America’s Beautiful Clean Coal Industry,” directing the Departments of Agriculture, Energy, and the Interior to identify coal resources and reserves on Federal lands for mining by public or private actors, prioritize and expedite leases for coal mining on Federal lands, and rescind regulations that discourage investments in coal production, among other things. The Executive Order also directs the Departments of Commerce, Energy, and the Interior to identify regions with suitable coal-powered infrastructure for AI data centers, assess the potential for expanding coal-powered infrastructure to meet AI data center electricity needs, and submit a report of findings and proposals to the White House National Energy Dominance Council, Assistant to the President for Science & Technology, and Special Advisor for AI and Crypto by June 7, 2025.
House CCP Committee Releases Report on DeepSeek Concerns
On April 16, the House Select Committee on the Chinese Communist Party released its report on DeepSeek and its AI platform, titled DeepSeek Unmasked: Exposing the CCP’s Latest Tool for Spying, Stealing, and Subverting U.S. Export Control Restrictions. Stating that DeepSeek “represents a profound threat to our nation’s security,” the report found that DeepSeek sends U.S. data to the Chinese government and manipulates chatbot outputs to “align with the CCP’s ideological and political objectives.” The report also found that it was “highly likely” that DeepSeek used model distillation techniques to extract reasoning outputs and copy leading U.S. AI model capabilities in order to expedite development. The report further found that DeepSeek violated U.S. semiconductor export controls. The report called on the U.S. to expand export controls and improve enforcement, in addition to preparing for “strategic surprise” arising from rapid advancements in Chinese AI. Ultimately, the report may help to accelerate possible U.S. Government bans on DeepSeek along the lines of the Kansas ban discussed below.
Continue Reading April 2025 AI Developments Under the Trump Administration
U.S. Tech Legislative & Regulatory Update – First Quarter 2025
This quarterly update highlights key legislative, regulatory, and litigation developments in the first quarter of 2025 related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), and cryptocurrencies and blockchain.
I. Artificial Intelligence
A. Federal Legislative Developments
In the first quarter, members of Congress introduced several AI bills addressing…
Continue Reading U.S. Tech Legislative & Regulatory Update – First Quarter 2025
March 2025 AI Developments Under the Trump Administration
This is part of an ongoing series of Covington blogs on the AI policies, executive orders, and other actions of the Trump Administration. This blog describes AI actions taken by the Trump Administration in March 2025, and prior articles in this series are available here.
White House Receives Public Comments on AI Action Plan
On March 15, the White House Office of Science & Technology Policy and the Networking and Information Technology Research and Development National Coordination Office within the National Science Foundation closed the comment period for public input on the White House’s AI Action Plan, following their February 6 issuance of a Request for Information (“RFI”) on the AI Action Plan. As required by President Trump’s AI EO, the RFI called on stakeholders to submit comments on the highest-priority policy actions that should be included in the new AI Action Plan, organized around 20 broad, non-exclusive topics for potential input, including data centers, data privacy and security, technical and safety standards, intellectual property, and procurement, to inform a plan that achieves the AI EO’s policy of “sustain[ing] and enhanc[ing] America’s global AI dominance.”
The RFI resulted in 8,755 submitted comments, including submissions from nonprofit organizations, think tanks, trade associations, industry groups, academia, and AI companies. The final AI Action Plan is expected by July of 2025.
NIST Launches New AI Standards Initiatives
The National Institute of Standards & Technology (“NIST”) announced several AI initiatives in March to advance AI research and the development of AI standards. On March 19, NIST launched its GenAI Image Challenge, an initiative to evaluate generative AI “image generators” and “image discriminators,” i.e., AI models designed to detect if images are AI-generated. NIST called on academia and industry research labs to participate in the challenge by submitting generators and discriminators to NIST’s GenAI platform.
On March 24, NIST released its final report on Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations, NIST AI 100-2e2025, with voluntary guidance for securing AI systems against adversarial manipulations and attacks. Noting that adversarial attacks on AI systems “have been demonstrated under real-world conditions, and their sophistication and impacts have been increasing steadily,” the report provides a taxonomy of AI system attacks on predictive and generative AI systems at various stages of the “machine learning lifecycle.”
On March 25, NIST announced the launch of an “AI Standards Zero Drafts project” that will pilot a new process for creating AI standards. The new standards process will involve the creation of preliminary “zero drafts” of AI standards drafted by NIST and informed by rounds of stakeholder input, which will be submitted to standards developing organizations (“SDOs”) for formal standardization. NIST outlined four AI topics for the pilot of the Zero Drafts project: (1) AI transparency and documentation about AI systems and data; (2) methods and metrics for AI testing, evaluation, verification, and validation (“TEVV”); (3) concepts and terminology for AI system designs, architectures, processes, and actors; and (4) technical measures for reducing synthetic content risks. NIST called for stakeholder input on the topics, scope, and priorities of the Zero Drafts process, with no set deadline for submitting responses.
Continue Reading March 2025 AI Developments Under the Trump Administration
OMB Issues First Trump 2.0-Era Requirements for AI Use and Procurement by Federal Agencies
On April 3, the White House Office of Management and Budget (“OMB”) released two memoranda with AI guidance and requirements for federal agencies, Memorandum M-25-21 on Accelerating Federal Use of AI through Innovation, Governance, and Public Trust (“OMB AI Use Memo”) and Memorandum M-25-22 on Driving Efficient Acquisition of Artificial Intelligence in Government (“OMB AI Procurement Memo”). According to the White House’s fact sheet, the OMB AI Use and AI Procurement Memos (collectively, the “new OMB AI Memos”), which rescind and replace OMB memos on AI use and procurement issued under President Biden’s Executive Order 14110 (“Biden OMB AI Memos”), shift U.S. AI policy to a “forward-leaning, pro-innovation, and pro-competition mindset” that will make agencies “more agile, cost-effective, and efficient.” The new OMB AI Memos implement President Trump’s January 23 Executive Order 14179 on “Removing Barriers to American Leadership in Artificial Intelligence” (the “AI EO”), which directs the OMB to revise the Biden OMB AI Memos to make them consistent with the AI EO’s policy of “sustain[ing] and enhanc[ing] America’s global AI dominance.”
Overall, the new OMB AI Memos build on the frameworks established under President Trump’s 2020 Executive Order 13960 on “Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government” and the Biden OMB AI Memos. This is consistent with the AI EO, which noted that the Administration would “revise” the Biden AI Memos “as necessary.” At the same time, the new OMB AI Memos include some significant differences from the Biden OMB’s approach in the areas discussed below (as well as other areas).
- Scope & Definitions. The OMB AI Use Memo applies to “new and existing AI that is developed, used, or acquired by or on behalf of covered agencies,” with certain exclusions for the Intelligence Community and the Department of Defense. The memo defines “AI” by reference to Section 238(g) of the John S. McCain National Defense Authorization Act for Fiscal Year 2019. Like the Biden OMB AI Memos, the OMB AI Use Memo states that “no system should be considered too simple to qualify as covered AI due to a lack of technical complexity.”
The OMB AI Procurement Memo applies to “AI systems or services that are acquired by or on behalf of covered agencies,” excluding the Intelligence Community, and includes “data systems, software, applications, tools, or utilities” that are “established primarily” for researching, developing, or implementing AI or where an “AI capability” is integrated into another process, operational activity, or technology system. The memo excludes AI that is “embedded” in “common commercial products” that are widely available for commercial use and have “substantial non-AI purposes or functionalities,” along with AI “used incidentally by a contractor” during contract performance. In other words, the policies are targeted at software that is primarily used for its AI capabilities, rather than at software that happens to incorporate AI.
Senate Judiciary Subcommittee Holds Hearing on the “Censorship Industrial Complex”
On March 24, the Senate Judiciary Subcommittee on the Constitution held a hearing on the “Censorship Industrial Complex,” where senators and witnesses expressed divergent views on risks to First Amendment rights. Senator Eric Schmitt (R-MO), the Subcommittee Chair, began the hearing by warning that the “vast censorship enterprise that the…
Continue Reading Senate Judiciary Subcommittee Holds Hearing on the “Censorship Industrial Complex”
California Frontier AI Working Group Issues Report on Foundation Model Regulation
On March 18, the Joint California Policy Working Group on AI Frontier Models (the “Working Group”) released its draft report on the regulation of foundation models, with the aim of providing an “evidence-based foundation for AI policy decisions” in California that “ensure[s] these powerful technologies benefit society globally while reasonably managing emerging risks.” The Working Group was established by California Governor Gavin Newsom (D) in September 2024, following his veto of California State Senator Scott Wiener (D-San Francisco)’s Safe & Secure Innovation for Frontier AI Models Act (SB 1047). The Working Group builds on California’s partnership with Stanford University and the University of California, Berkeley, established by Governor Newsom’s 2023 Executive Order on generative AI.
Noting that “foundation model capabilities have rapidly improved” since the veto of SB 1047 and that California’s “unique opportunity” to shape AI governance “may not remain open indefinitely,” the report assesses transparency, third-party risk assessment, and adverse event reporting requirements as key components for foundation model regulation.
Transparency Requirements. The report finds that foundation model transparency requirements are a “necessary foundation” for AI regulation and recommends that policymakers “prioritize public-facing transparency to best advance accountability.” Specifically, the report recommends transparency requirements that focus on five categories of information about foundation models: (1) training data acquisition, (2) developer safety practices, (3) developer security practices, (4) pre-deployment testing by developers and third parties, and (5) downstream impacts, potentially including disclosures from entities that host foundation models for download or use.
Third-Party Risk Assessments. Noting that transparency “is often insufficient and requires supplementary verification mechanisms” for accountability, the report adds that third-party risk assessments are “essential” for “creating incentives for developers to increase the safety of their models.” To support effective third-party AI evaluations, the report calls on policymakers to consider establishing safe harbors that indemnify public interest safety research and “routing mechanisms” to quickly communicate identified vulnerabilities to developers and affected parties.
Whistleblower Protections. Additionally, the report assesses the need for whistleblower protections for employees and contractors of foundation model developers. The report advises policymakers to “consider protections that cover a broader range of [AI developer] activities,” such as failures to follow a company’s AI safety policy, even if reported conduct does not violate existing laws.
Continue Reading California Frontier AI Working Group Issues Report on Foundation Model Regulation
February 2025 AI Developments Under the Trump Administration
This is part of an ongoing series of Covington blogs on the AI policies, executive orders, and other actions of the Trump Administration. The first blog summarized key actions taken in the first weeks of the Trump Administration, including the revocation of President Biden’s 2023 Executive Order 14110 on the “Safe, Secure, and Trustworthy Development and Use of AI” and the release of President Trump’s Executive Order 14179 on “Removing Barriers to American Leadership in Artificial Intelligence” (“AI EO”). This blog describes actions on AI taken by the Trump Administration in February 2025.
White House Issues Request for Information on AI Action Plan
On February 6, the White House Office of Science & Technology Policy (“OSTP”) issued a Request for Information (“RFI”) seeking public input on the content that should be in the White House’s yet-to-be-issued AI Action Plan. The RFI marks the Trump Administration’s first significant step in implementing the very broad goals in the January 2025 AI EO, which requires Assistant to the President for Science & Technology Michael Kratsios, White House AI & Crypto Czar David Sacks, and National Security Advisor Michael Waltz to develop an “action plan” to achieve the AI EO’s policy of “sustain[ing] and enhanc[ing] America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.” The RFI states that the AI Action Plan will “define the priority policy actions needed to sustain and enhance America’s AI dominance, and to ensure that unnecessarily burdensome requirements do not hamper private sector AI innovation.”
Specifically, the RFI seeks public comment on the “highest priority policy actions” that should be included in the AI Action Plan and encourages respondents to recommend “concrete” actions needed to address AI policy issues. While noting that responses may “address any relevant AI policy topic,” the RFI provides 20 topics for potential input. These topics are general and do not include specific questions or areas where particular input is needed. The topics include: hardware and chips, data centers, energy consumption and efficiency, model and open-source development, data privacy and security, technical and safety standards, national security and defense, intellectual property, procurement, and export controls. As of March 13, over 325 comments on the AI Action Plan have been submitted. The public comment period ends on March 15, 2025. Under the EO, the finalized AI Action Plan must be submitted to the President by July of 2025.
Continue Reading February 2025 AI Developments Under the Trump Administration
California Senator Introduces AI Safety Bill
On February 27, California State Senator Scott Wiener (D-San Francisco) released the text of SB 53, reviving efforts to establish AI safety regulations in a state that is home to both Silicon Valley and the nation’s first comprehensive privacy law. SB 53 proposes a significantly narrower approach compared to…
Continue Reading California Senator Introduces AI Safety Bill
Blog Post: State Legislatures Consider New Wave of 2025 AI Legislation
Authors: Jennifer Johnson, Jayne Ponder, August Gweon, Analese Bridges
State lawmakers are considering a diverse array of AI legislation, with hundreds of bills introduced in 2025. As described further in this blog post, many of these AI legislative proposals fall into several key categories: (1) comprehensive consumer protection legislation similar to the Colorado AI Act, (2) sector-specific legislation on automated decision-making, (3) chatbot regulation, (4) generative AI transparency requirements, (5) AI data center and energy usage requirements, and (6) frontier model public safety legislation. Although these categories represent just a subset of current AI legislative activity, they illustrate the major priorities of state legislatures and highlight new AI laws that may be on the horizon.
- Consumer Protection. Lawmakers in over a dozen states have introduced legislation aimed at reducing algorithmic discrimination in high-risk AI or automated decision-making systems used to make “consequential decisions,” embracing the risk- and role-based approach of the Colorado AI Act. In general, these frameworks would establish developer and deployer duties of care to protect consumers from algorithmic discrimination and would require risks or instances of algorithmic discrimination to be reported to state attorneys general. They would also require notices to consumers and disclosures to other parties and establish consumer rights related to the AI system. For example, Virginia’s High-Risk AI Developer & Deployer Act (HB 2094), which follows this approach, passed out of Virginia’s legislature this month.
- Sector-Specific Automated Decision-making. Lawmakers in more than a dozen states have introduced legislation that would regulate the use of AI or automated decision-making tools (“ADMT”) in specific sectors, including healthcare, insurance, employment, and finance. For example, Massachusetts HD 3750 would amend the state’s health insurance consumer protection law to require healthcare insurance carriers to disclose the use of AI or ADMT for reviewing insurance claims and report AI and training data information to the Massachusetts Division of Insurance. Other bills would regulate the use of ADMT in the financial sector, such as New York A773, which would require banks that use ADMT for lending decisions to conduct annual disparate impact analyses and disclose such analyses to the New York Attorney General. Relatedly, state legislatures are considering a wide range of approaches to regulating employers’ uses of AI and ADMT. For example, Georgia SB 164 and Illinois SB 2255 would both prohibit employers from using ADMT to set wages unless certain requirements are satisfied.
Continue Reading Blog Post: State Legislatures Consider New Wave of 2025 AI Legislation