August Gweon

August Gweon counsels national and multinational companies on data privacy, cybersecurity, antitrust, and technology policy issues, including issues related to artificial intelligence and other emerging technologies. August leverages his experience in AI and technology policy to help clients understand complex technology developments, risks, and policy trends.

August regularly provides advice to clients on privacy and competition frameworks and AI regulations, with an increasing focus on U.S. state AI legislative developments and trends related to synthetic content, automated decision-making, and generative AI. He also assists clients in assessing federal and state privacy regulations like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in public policy discussions and rulemaking processes.

This is the first in a new series of Covington blogs on the AI policies, executive orders, and other actions of the new Trump Administration.  This blog describes key actions on AI taken by the Trump Administration in January 2025.

Outgoing President Biden Issues Executive Order and Data Center Guidance for AI Infrastructure

Before turning to the Trump Administration, we note one key AI development from the final weeks of the Biden Administration.  On January 14, in one of his final acts in office, President Biden issued Executive Order 14141 on “Advancing United States Leadership in Artificial Intelligence Infrastructure.”  This EO, which remains in force, sets out requirements and deadlines for the construction and operation of “frontier AI infrastructure,” including data centers and clean energy facilities, by private-sector entities on federal land.  Specifically, EO 14141 directs the Departments of Defense (“DOD”) and Energy (“DOE”) to lease federal lands for the construction and operation of AI data centers and clean energy facilities by the end of 2027, establishes solicitation and lease application processes for private sector applicants, directs federal agencies to take various steps to streamline and consolidate environmental permitting for AI infrastructure, and directs the DOE to take steps to update the U.S. electricity grid to meet the growing energy demands of AI.

On January 14, and in tandem with the release of EO 14141, the Office of Management and Budget (“OMB”) issued Memorandum M-25-03 on “Implementation Guidance for the Federal Data Center Enhancement Act,” directing federal agencies to implement requirements related to the operation of data centers by federal agencies or government contractors.  Specifically, the memorandum requires federal agencies to regularly monitor and optimize data center electrical consumption, including through the use of automated tools, and to arrange for assessments by certified specialists of data center energy and water usage and efficiency, among other requirements.  Like EO 14141, Memorandum M-25-03 has yet to be rescinded by the Trump Administration.

Trump White House Revokes President Biden’s 2023 AI Executive Order

On January 20, President Trump issued Executive Order 14148 on “Initial Rescissions of Harmful Executive Orders and Actions,” revoking dozens of Biden Administration executive actions, including the October 2023 Executive Order 14110 on the “Safe, Secure, and Trustworthy Development and Use of AI” (“2023 AI EO”).  To implement these revocations, Section 3 of EO 14148 directs the White House Domestic Policy Council (“DPC”) and National Economic Council (“NEC”) to “review all Federal Government actions” taken pursuant to the revoked executive orders and “take all necessary steps to rescind, replace, or amend such actions as appropriate.”  EO 14148 further directs the DPC and NEC to submit, within 45 days of the EO, lists of additional Biden Administration orders, memoranda, and proclamations that should be rescinded, along with “replacement orders, memoranda, or proclamations” to “increase American prosperity.”  Finally, EO 14148 directs National Security Advisor Michael Waltz to initiate a “complete and thorough review” of all National Security Memoranda (“NSMs”) issued by the Biden Administration and to recommend NSMs for rescission within 45 days of the EO.

Continue Reading January 2025 AI Developments – Transitioning to the Trump Administration

On February 6, the White House Office of Science & Technology Policy (“OSTP”) and National Science Foundation (“NSF”) issued a Request for Information (“RFI”) seeking public input on the “Development of an Artificial Intelligence Action Plan.”  The RFI marks a first step toward the implementation of the Trump Administration’s January

Continue Reading Trump Administration Seeks Public Comment on AI Action Plan

On January 29, Senator Josh Hawley (R-MO) introduced the Decoupling America’s Artificial Intelligence Capabilities from China Act (S. 321), one of the first bills of the 119th Congress to address escalating U.S. competition with China on artificial intelligence.  The new legislation comes just days after Chinese AI company DeepSeek

Continue Reading Senator Hawley Introduces Sweeping U.S.-China AI Decoupling Bill

On January 14, 2025, the Biden Administration issued an Executive Order on “Advancing United States Leadership in Artificial Intelligence Infrastructure” (the “EO”), with the goals of preserving U.S. economic competitiveness and access to powerful AI models, preventing U.S. dependence on foreign infrastructure, and promoting U.S. clean energy production to power the development and operation of AI.  Pursuant to these goals, the EO outlines criteria and timeframes for the construction and operation of “frontier AI infrastructure,” including data centers and clean energy resources, by private-sector entities on federal land.  The EO builds upon a series of actions on AI issued by the Biden Administration, including the October 2023 Executive Order on Safe, Secure, and Trustworthy AI and an October 2024 AI National Security Memorandum.

I. Federal Sites for AI Data Centers & Clean Energy Facilities

The EO contains various requirements for soliciting and leasing federal sites for AI infrastructure, including:

  • The EO directs the Departments of Defense (“DOD”) and Energy (“DOE”) to each identify and lease, by the end of 2027, at least three federal sites to private-sector entities for the construction and operation of “frontier AI data centers” and “clean energy facilities” to power them (“frontier AI infrastructure”).  Additionally, the EO directs the Department of the Interior (“DOI”) to identify (1) federal sites suitable for additional private-sector clean energy facilities as components of frontier AI infrastructure, and (2) at least five “Priority Geothermal Zones” suitable for geothermal power generation.  Finally, the EO directs the DOD and DOE to publish a joint list of ten high-priority federal sites that are most conducive to nuclear power capacity that can be readily available to serve AI data centers by December 31, 2035.

  • Public Solicitations.  By March 31, 2025, the DOD and DOE must launch competitive, 30-day public solicitations for private-sector proposals to lease federal land for frontier AI infrastructure construction.  In addition to identifying proposed sites for AI infrastructure construction, solicitations will require applicants to submit detailed plans regarding:
  • Timelines, financing methods, and technical construction plans for the site;
  • Proposed frontier AI training work to occur on the site once operational;
  • Use of high labor and construction standards at the site; and
  • Proposed lab-security measures, including personnel and material access requirements, associated with the operation of frontier AI infrastructure.

The DOD and DOE must select winning proposals by June 30, 2025, taking into account effects on competition in the broader AI ecosystem and other selection criteria, including an applicant’s proposed financing and funding sources; plans for high-quality AI training, resource efficiency, labor standards, and commercialization of IP developed at the site; safety and security measures and capabilities; AI workforce capabilities; and prior experience with comparable construction projects.

Continue Reading Biden Administration Releases Executive Order on AI Infrastructure

This is the first blog in a series covering the Fiscal Year 2025 National Defense Authorization Act (“FY 2025 NDAA”).  This post covers: (1) NDAA sections affecting acquisition policy and contract administration that may be of greatest interest to government contractors; (2) initiatives that underscore Congress’s commitment to strengthening cybersecurity, both domestically and internationally; and (3) NDAA provisions that aim to accelerate the Department of Defense’s adoption of AI and Autonomous Systems and counter efforts by U.S. adversaries to subvert them.

Future posts in this series will address NDAA provisions targeting China, supply chain and stockpile security, the revitalized Administrative False Claims Act, and Congress’s effort to mature the Office of Strategic Capital and leverage private investment to accelerate the development of critical technologies and strengthen the defense industrial base.  Subscribe to our blog here so that you do not miss these updates.

FY 2025 NDAA Overview

On December 23, 2024, President Biden signed the FY 2025 NDAA into law.  The FY 2025 NDAA authorizes $895.2 billion in funding for the Department of Defense (“DoD”) and Department of Energy national security programs—a $9 billion or 1 percent increase over 2024.  NDAA authorizations have traditionally served as a reliable indicator of congressional sentiment on final defense appropriations.

FY 2025 marks the 64th consecutive year in which an NDAA has been enacted, reflecting its status as “must-pass” legislation.  As in prior years, the NDAA has been used as a legislative vehicle to incorporate other measures, including the FY 2025 Department of State and Intelligence Authorization Acts, as well as provisions related to the Departments of Justice, Homeland Security, and Veterans Affairs, among others.

Below are select provisions of interest to companies across industries that engage in U.S. Government contracting, including defense contractors, technology providers, life sciences firms, and commercial-item suppliers.

Continue Reading President Biden signs the National Defense Authorization Act for Fiscal Year 2025

The results of the 2024 U.S. election are expected to have significant implications for AI legislation and regulation at both the federal and state level. 

Like the first Trump Administration, the second Trump Administration is likely to prioritize AI innovation, R&D, national security uses of AI, and U.S. private sector investment and leadership in AI.  Although recent AI model testing and reporting requirements established by the Biden Administration may be halted or revoked, efforts to promote private-sector innovation and competition with China are expected to continue.  And while antitrust enforcement involving large technology companies may continue in the Trump Administration, more prescriptive AI rulemaking efforts such as those launched by the current leadership of the Federal Trade Commission (“FTC”) are likely to be curtailed substantially.

In the House and Senate, Republican majorities are likely to adopt priorities similar to those of the Trump Administration, with a continued focus on AI-generated deepfakes and prohibitions on the use of AI for government surveillance and content moderation. 

At the state level, legislatures in California, Texas, Colorado, Connecticut, and others likely will advance AI legislation on issues ranging from algorithmic discrimination to digital replicas and generative AI watermarking. 

This post covers the effects of the recent U.S. election on these areas and what to expect as we enter 2025.  (Click here for our summary of the 2024 election’s implications for AI-related industrial policy and competition with China.)

The White House

As stated in the Republican Party’s 2024 platform and by the president-elect on the campaign trail, the incoming Trump Administration plans to revoke President Biden’s October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (“2023 AI EO”).  The incoming administration also is expected to halt ongoing agency rulemakings related to AI, including a Department of Commerce rulemaking to implement the 2023 AI EO’s dual-use foundation model reporting and red-team testing requirements.  President-elect Trump’s intention to re-nominate Russell Vought as Director of the Office of Management and Budget (“OMB”) suggests that a light-touch approach to AI regulation may be taken across all federal agencies.  As OMB Director in the prior Trump Administration, Vought issued a memo directing federal agencies to “avoid regulatory or non-regulatory actions that needlessly hamper AI innovation and growth.”

Continue Reading U.S. AI Policy Expectations in the Trump Administration, GOP Congress, and the States

This quarterly update highlights key legislative, regulatory, and litigation developments in the third quarter of 2024 related to artificial intelligence (“AI”) and connected and automated vehicles (“CAVs”).  As noted below, some of these developments provide industry with the opportunity for participation and comment.

I.      Artificial Intelligence

Federal Legislative Developments

There continued to be strong bipartisan interest in passing federal legislation related to AI.  While it has been challenging to pass legislation through this Congress, there remains the possibility that one or more of the more targeted bills that have bipartisan support and Committee approval could advance during the lame duck period.

  • Senate Commerce, Science, and Transportation Committee: Lawmakers in the Senate Commerce, Science, and Transportation Committee moved forward with nearly a dozen AI-related bills, including legislation focused on developing voluntary technical guidelines for AI systems and establishing AI testing and risk assessment frameworks. 
    • In July, the Committee voted to advance the Validation and Evaluation for Trustworthy (VET) Artificial Intelligence Act (S.4769), which was introduced by Senators John Hickenlooper (D-CO) and Shelley Moore Capito (R-WV).  The Act would require the National Institute of Standards and Technology (“NIST”) to develop voluntary guidelines and specifications for internal and external assurances of AI systems, in collaboration with public and private sector organizations. 
    • In August, the Promoting United States Leadership in Standards Act of 2024 (S.3849) was placed on the Senate legislative calendar after advancing out of the Committee in July.  Introduced in February 2024 by Senators Mark Warner (D-VA) and Marsha Blackburn (R-TN), the Act would require NIST to support U.S. involvement in the development of AI technical standards through briefings, pilot programs, and other activities.  
    • In July, the Future of Artificial Intelligence Innovation Act of 2024 (S.4178)—introduced in April by Senators Maria Cantwell (D-WA), Todd Young (R-IN), John Hickenlooper (D-CO), and Marsha Blackburn (R-TN)—was ordered to be reported out of the Committee and gained three additional co-sponsors: Senators Roger F. Wicker (R-MS), Ben Ray Lujan (D-NM), and Kyrsten Sinema (I-AZ).  The Act would codify the AI Safety Institute, which would be required to develop voluntary guidelines and standards for promoting AI innovation through public-private partnerships and international alliances.
    • In July, the Artificial Intelligence Research, Innovation, and Accountability Act of 2023 (S.3312) passed out of the Committee, as amended.  Introduced in November 2023 by Senators John Thune (R-SD), Amy Klobuchar (D-MN), Roger Wicker (R-MS), John Hickenlooper (D-CO), Ben Ray Lujan (D-NM), and Shelley Moore Capito (R-WV), the Act would establish a comprehensive regulatory framework for “high-impact” AI systems, including testing and evaluation standards, risk assessment requirements, and transparency reporting requirements.  The Act would also require NIST to develop sector-specific recommendations for agency oversight of high-impact AI, and to research and develop means for distinguishing between content created by humans and AI systems.
  • Senate Homeland Security and Governmental Affairs Committee:  In July, the Senate Homeland Security Committee voted to advance the PREPARED for AI Act (S.4495).  Introduced in June by Senators Gary Peters (D-MI) and Thom Tillis (R-NC), the Act would establish a risk-based framework for the procurement and use of AI by federal agencies and create a Chief AI Officers Council and agency AI Governance Board to ensure that federal agencies benefit from advancements in AI.
  • National Defense Authorization Act for Fiscal Year 2025:  In August, Senators Gary Peters (D-MI) and Mike Braun (R-IN) proposed an amendment (S.Amdt.3232) to the National Defense Authorization Act for Fiscal Year 2025 (S.4638) (“NDAA”).  The amendment would add the Transparent Automated Governance Act and the AI Leadership Training Act to the NDAA.  The Transparent Automated Governance Act would require the Office of Management and Budget (“OMB”) to issue guidance to agencies to implement transparency practices relating to the use of AI and other automated systems.  The AI Leadership Training Act would require OMB to establish a training program for federal procurement officials on the operational benefits and privacy risks of AI.  The Act would also require the Office of Personnel Management (“OPM”) to establish a training program on AI for federal management officials and supervisors.   

Continue Reading U.S. Tech Legislative, Regulatory & Litigation Update – Third Quarter 2024

This is part of an ongoing series of Covington blogs on the implementation of Executive Order No. 14110 on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” (the “AI EO”), issued by President Biden on October 30, 2023.  The first blog summarized the AI EO’s key provisions and

Continue Reading October 2024 Developments Under President Biden’s AI Executive Order

On October 28, Texas State Representative Giovanni Capriglione (R-Tarrant County) released a draft of the Texas Responsible AI Governance Act (“TRAIGA”), after nearly a year of collecting input from industry stakeholders.  Representative Capriglione, who authored Texas’s Data Privacy and Security Act (discussed here) and currently co-chairs the state’s AI Advisory Council, appears likely to introduce TRAIGA in the upcoming legislative session scheduled to begin on January 14, 2025.  Modeled after the Colorado AI Act (SB 205) (discussed here) and the EU AI Act, TRAIGA would establish obligations for developers, deployers, and distributors of “high-risk AI systems.”  Additionally, TRAIGA would establish an “AI Regulatory Sandbox Program” for participating AI developers to test AI systems under a statutory exemption.

Although a number of states have expressed significant interest in AI regulation, Texas would, if TRAIGA is enacted, become only the second state to adopt industry-agnostic, risk-based AI legislation, following the passage of the Colorado AI Act in May.  There is significant activity in other states as well, as the California Privacy Protection Agency considers rules that would apply to certain automated decisionmaking and AI systems, and other states are expected to introduce AI legislation in the new session.  In addition to its requirements for high-risk AI and its AI sandbox program, TRAIGA would amend Texas’s Data Privacy and Security Act to incorporate AI-specific provisions and would provide for an AI workforce grant program and a new “AI Council” to provide advisory opinions and guidance on AI.

Despite these similarities, a number of provisions in the 41-page draft of TRAIGA would differ from the Colorado AI Act:

Lower Thresholds for “High-Risk AI.”  Although TRAIGA takes a risk-based approach to regulation by focusing requirements on AI systems that present heightened risks to individuals, the scope of TRAIGA’s high-risk AI systems would arguably be broader than under the Colorado AI Act.  First, TRAIGA would apply to systems that are a “contributing factor” in consequential decisions, not only those that constitute a “substantial factor” in such decisions, as contemplated by the Colorado AI Act.  Additionally, TRAIGA would define “consequential decision” more broadly than the Colorado AI Act, to include decisions that affect consumers’ access to, cost of, or terms of, for example, transportation services, criminal case assessments, and electricity services.

Continue Reading Texas Legislature to Consider Sweeping AI Legislation in 2025

In the past several months, two state courts in the District of Columbia and California decided motions to dismiss in cases alleging that the use of certain revenue management software violated state antitrust laws in the residential property rental management and health insurance industries.  In both industries, parallel class actions

Continue Reading State Courts Dismiss Claims Involving the Use of Revenue Management Software in Residential Rental and Health Insurance Industries