Artificial Intelligence (AI)

On March 18, the Joint California Policy Working Group on AI Frontier Models (the “Working Group”) released its draft report on the regulation of foundation models, with the aim of providing an “evidence-based foundation for AI policy decisions” in California that “ensure[s] these powerful technologies benefit society globally while reasonably managing emerging risks.”  The Working Group was established by California Governor Gavin Newsom (D) in September 2024, following his veto of California State Senator Scott Wiener (D-San Francisco)’s Safe & Secure Innovation for Frontier AI Models Act (SB 1047).  The Working Group builds on California’s partnership with Stanford University and the University of California, Berkeley, established by Governor Newsom’s 2023 Executive Order on generative AI.

Noting that “foundation model capabilities have rapidly improved” since the veto of SB 1047 and that California’s “unique opportunity” to shape AI governance “may not remain open indefinitely,” the report assesses transparency, third-party risk assessment, and adverse event reporting requirements as key components for foundation model regulation.

Transparency Requirements.  The report finds that foundation model transparency requirements are a “necessary foundation” for AI regulation and recommends that policymakers “prioritize public-facing transparency to best advance accountability.”  Specifically, the report recommends transparency requirements that focus on five categories of information about foundation models: (1) training data acquisition, (2) developer safety practices, (3) developer security practices, (4) pre-deployment testing by developers and third parties, and (5) downstream impacts, potentially including disclosures from entities that host foundation models for download or use.

Third-Party Risk Assessments.  Noting that transparency “is often insufficient and requires supplementary verification mechanisms” for accountability, the report adds that third-party risk assessments are “essential” for “creating incentives for developers to increase the safety of their models.”  To support effective third-party AI evaluations, the report calls on policymakers to consider establishing safe harbors that indemnify public interest safety research and “routing mechanisms” to quickly communicate identified vulnerabilities to developers and affected parties. 

Whistleblower Protections.  Additionally, the report assesses the need for whistleblower protections for employees and contractors of foundation model developers.  The report advises policymakers to “consider protections that cover a broader range of [AI developer] activities,” such as failures to follow a company’s AI safety policy, even if reported conduct does not violate existing laws.

Continue Reading California Frontier AI Working Group Issues Report on Foundation Model Regulation

Since 2020, over 60 bills have been introduced in the Mexican Congress seeking to regulate artificial intelligence (AI). In the absence of a general AI legal framework, these bills have sought to regulate a broad range of issues, including governance, education, intellectual property, and data protection. Mexico lacks a comprehensive national

Continue Reading New Artificial Intelligence Legislation in Mexico

This is part of an ongoing series of Covington blogs on the AI policies, executive orders, and other actions of the Trump Administration.  The first blog summarized key actions taken in the first weeks of the Trump Administration, including the revocation of President Biden’s 2023 Executive Order 14110 on the “Safe, Secure, and Trustworthy Development and Use of AI” and the release of President Trump’s Executive Order 14179 on “Removing Barriers to American Leadership in Artificial Intelligence” (“AI EO”).  This blog describes actions on AI taken by the Trump Administration in February 2025.

White House Issues Request for Information on AI Action Plan

On February 6, the White House Office of Science & Technology Policy (“OSTP”) issued a Request for Information (“RFI”) seeking public input on the content that should be in the White House’s yet-to-be-issued AI Action Plan.  The RFI marks the Trump Administration’s first significant step in implementing the very broad goals in the January 2025 AI EO, which requires Assistant to the President for Science & Technology Michael Kratsios, White House AI & Crypto Czar David Sacks, and National Security Advisor Michael Waltz to develop an “action plan” to achieve the AI EO’s policy of “sustain[ing] and enhanc[ing] America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.”  The RFI states that the AI Action Plan will “define the priority policy actions needed to sustain and enhance America’s AI dominance, and to ensure that unnecessarily burdensome requirements do not hamper private sector AI innovation.”

Specifically, the RFI seeks public comment on the “highest priority policy actions” that should be included in the AI Action Plan and encourages respondents to recommend “concrete” actions needed to address AI policy issues.  While noting that responses may “address any relevant AI policy topic,” the RFI provides 20 topics for potential input.  These topics are general and do not include specific questions or areas where particular input is needed.  The topics include: hardware and chips, data centers, energy consumption and efficiency, model and open-source development, data privacy and security, technical and safety standards, national security and defense, intellectual property, procurement, and export controls.  As of March 13, over 325 comments on the AI Action Plan have been submitted.  The public comment period ends on March 15, 2025.  Under the EO, the finalized AI Action Plan must be submitted to the President within 180 days of the order, i.e., by late July of 2025.

Continue Reading February 2025 AI Developments Under the Trump Administration

Since taking office, President Trump has issued dozens of executive orders, many addressing key technology policy areas, including international trade and investment, artificial intelligence (AI), connected vehicles and drones, and trade controls.  Some of these executive actions reverse the previous administration’s efforts on these issues—such as the order revoking President Biden’s October 2023 executive order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence—while others initiate a formal review process, suggesting the Trump Administration may preserve, and perhaps strengthen, key tech policies implemented by the Biden Administration and the first Trump term.

Several of the executive actions President Trump has taken so far offer important opportunities for stakeholders to weigh in with Executive Branch agencies as they consider next steps, including whether to revoke, expand, or retain tech policies initiated under President Biden. Key initiatives include: 

America First Trade Policy

The President’s America First Trade Policy memorandum, issued on January 20, directs certain federal agencies to review policies issued by the Biden Administration.  The memo does not specifically provide for public comment on these policy reviews, but it offers insight into how the Administration may modify Biden Administration policy actions.  We recommend that interested stakeholders engage to share their views with the Administration.  Three critical areas in particular will affect stakeholders across tech industries:

  • China and Intellectual Property: Section 3(e) of the memo directs the Commerce Secretary to “assess the status of United States intellectual property rights such as patents, copyrights, and trademarks conferred upon PRC persons” and to “make recommendations to ensure reciprocal and balanced treatment of intellectual property rights with the PRC.”
  • Connected Vehicles:  Section 4(d) of the memo directs the Commerce Secretary to “review and recommend appropriate action with respect to the rulemaking by the Office of Information and Communication Technology and Services (ICTS) on connected vehicles.”  The memo specifically directs the Secretary to consider whether ICTS controls should be “expanded to account for additional connected products.”
  • Outbound Investment:  Section 4(e) of the President’s memo directs the Treasury Secretary, in consultation with the Commerce Secretary, to review whether President Biden’s outbound investment executive order “should be modified or rescinded and replaced,” and to “assess whether the [Treasury Department outbound investment regulation] includes sufficient controls to address national security threats.”  This review dovetails with the President’s America First Investment Policy memo, issued on February 21, which equates U.S. national security and U.S. economic security, and directs agencies to streamline regulatory reviews to promote foreign investment in the United States.

Continue Reading Flurry of Trump Administration Executive Orders Shakes Up Tech Policy, Creates Industry Opportunities

On January 29 – 31, 2025, Covington convened authorities from across our practice groups for the Sixth Annual Technology Forum, which explored recent global developments affecting businesses that develop, deploy, and use cutting-edge technologies. Seventeen Covington attorneys discussed global regulatory trends and forecasts relevant to these industries, highlights of which are captured below.  Please click here to access any of the segments from the 2025 Tech Forum.

Day 1: What’s Happening Now in the U.S. & Europe

Early Days of the New U.S. Administration

Covington attorney Holly Fechner and Covington public policy authority Bill Wichterman addressed how the incoming administration has signaled a shift in technology policy, with heightened scrutiny on Big Tech, AI, cryptocurrency, and privacy regulations. A new Executive Order on AI aims to remove barriers to American leadership in AI, while trade controls and outbound investment restrictions seek to strengthen national security in technology-related transactions. Meanwhile, the administration’s approach to decoupling from China is evolving, with stricter protectionist measures replacing prior subsidy-based initiatives.

Cross-Border Investment

Covington attorney Jonathan Wakely discussed the role of ongoing geopolitical tensions in shaping cross-border investment policies, particularly in technology-related transactions. He noted that the Committee on Foreign Investment in the United States (CFIUS) remains aggressive in reviewing deals that could pose China-related risks. The new Outbound Investment Rule introduces restrictions on U.S. persons investing in Chinese companies engaged in certain AI, quantum computing, and semiconductor activities.

Updates on European Tech Regulation

Covington attorneys Sam Choi and Bart Szewczyk explained how, in light of the Draghi Report on European competitiveness and growing geopolitical pressures, the European Commission plans to focus on “European competitiveness” during this term.  The European Commission has announced plans to increase investment in its tech sectors and to find ways to ease the regulatory burden on companies.  The EU is expected to focus on implementing, and potentially streamlining, its existing tech regulatory regime – rather than adopting new tech regulations that would impose added obligations on companies.  The EU already has in place a robust regulatory regime covering privacy, cybersecurity, competition, data sharing, online platforms, and AI.  In 2025, the recently adopted AI Act and the Data Act will start to apply, so companies should prepare for their implementation.

Continue Reading Covington Technology Forum Spotlight – The Great Race: Keeping Up as Technology and Regulation Rapidly Evolve

On February 27, California State Senator Scott Wiener (D-San Francisco) released the text of SB 53, reviving efforts to establish AI safety regulations in a state that is home to both Silicon Valley and the nation’s first comprehensive privacy law.  SB 53 proposes a significantly narrower approach compared to

Continue Reading California Senator Introduces AI Safety Bill

Authors: Jennifer Johnson, Jayne Ponder, August Gweon, Analese Bridges

State lawmakers are considering a diverse array of AI legislation, with hundreds of bills introduced in 2025.  As described further in this blog post, many of these AI legislative proposals fall into several key categories: (1) comprehensive consumer protection legislation similar to the Colorado AI Act, (2) sector-specific legislation on automated decision-making, (3) chatbot regulation, (4) generative AI transparency requirements, (5) AI data center and energy usage requirements, and (6) frontier model public safety legislation.  Although these categories represent just a subset of current AI legislative activity, they illustrate the major priorities of state legislatures and highlight new AI laws that may be on the horizon.

  • Consumer Protection.  Lawmakers in over a dozen states have introduced legislation aimed at reducing algorithmic discrimination in high-risk AI or automated decision-making systems used to make “consequential decisions,” embracing the risk- and role-based approach of the Colorado AI Act.  In general, these frameworks would establish developer and deployer duties of care to protect consumers from algorithmic discrimination and would require risks or instances of algorithmic discrimination to be reported to state attorneys general.  They would also require notices to consumers and disclosures to other parties and establish consumer rights related to the AI system.  For example, Virginia’s High-Risk AI Developer & Deployer Act (HB 2094), which follows this approach, passed out of Virginia’s legislature this month.
  • Sector-Specific Automated Decision-Making.  Lawmakers in more than a dozen states have introduced legislation that would regulate the use of AI or automated decision-making tools (“ADMT”) in specific sectors, including healthcare, insurance, employment, and finance.  For example, Massachusetts HD 3750 would amend the state’s health insurance consumer protection law to require healthcare insurance carriers to disclose the use of AI or ADMT for reviewing insurance claims and report AI and training data information to the Massachusetts Division of Insurance.  Other bills would regulate the use of ADMT in the financial sector, such as New York A773, which would require banks that use ADMT for lending decisions to conduct annual disparate impact analyses and disclose such analyses to the New York Attorney General.  Relatedly, state legislatures are considering a wide range of approaches to regulating employers’ uses of AI and ADMT.  For example, Georgia SB 164 and Illinois SB 2255 would both prohibit employers from using ADMT to set wages unless certain requirements are satisfied.

Continue Reading State Legislatures Consider New Wave of 2025 AI Legislation

On February 7, 2025, the OECD launched a voluntary framework for companies to report on their efforts to promote safe, secure, and trustworthy AI.  This global reporting framework is intended to monitor and support the application of the International Code of Conduct for Organisations Developing Advanced AI Systems delivered by the 2023 G7 Hiroshima AI Process (“HAIP Code of Conduct”).  Organizations can choose to comply with the HAIP Code of Conduct and participate in the HAIP reporting framework on a voluntary basis.  The reporting framework will allow participating organizations that comply with the HAIP Code of Conduct to showcase the efforts they have made towards ensuring responsible AI practices – in a way that is standardized and comparable with those of other companies.

Organizations that choose to report under the HAIP reporting framework would complete a questionnaire that contains the following seven sections:

  1. Risk identification and evaluation – includes questions regarding, among others, how the organization classifies risk, identifies and evaluates risks, and conducts testing.
  2. Risk management and information security – includes questions regarding, among others, how the organization promotes data quality, protects intellectual property and privacy, and implements AI-specific information security practices.
  3. Transparency reporting on advanced AI systems – includes questions regarding, among others, reports, technical documentation, and transparency practices.
  4. Organizational governance, incident management, and transparency – includes questions regarding, among others, organizational governance, staff training, and AI incident response processes.
  5. Content authentication & provenance mechanisms – includes questions regarding mechanisms to inform users that they are interacting with an AI system, and the organization’s use of mechanisms such as labelling or watermarking to enable users to identify AI-generated content.
  6. Research & investment to advance AI safety & mitigate societal risks – includes questions regarding, among others, how the organization participates in projects, collaborations and investments regarding research on various facets of AI, such as AI safety, security, trustworthiness, risk mitigation tools, and environmental risks.
  7. Advancing human and global interests – includes questions regarding, among others, how the organization seeks to support digital literacy and human-centric AI and to drive positive change through AI.

Continue Reading OECD Launches Voluntary Reporting Framework on AI Risk Management Practices

Last month, DeepSeek, an AI start-up based in China, grabbed headlines with claims that its latest large language AI model, DeepSeek-R1, could perform on par with more expensive and market-leading AI models despite allegedly requiring less than $6 million worth of computing power from older and less-powerful chips.  Although some industry observers have raised doubts about the validity of DeepSeek’s claims, its AI model and AI-powered application piqued the curiosity of many, leading the DeepSeek application to become the most downloaded in the United States in late January.  DeepSeek was founded in July 2023 and is owned by High-Flyer, a hedge fund based in Hangzhou, Zhejiang.

The explosive popularity of DeepSeek, coupled with its Chinese ownership, has unsurprisingly raised data security concerns among U.S. federal and state officials.  These concerns echo many of the same considerations that led to a FAR rule that prohibits telecommunications equipment and services from Huawei and certain other Chinese manufacturers.  What is remarkable here is the pace at which officials at different levels of government—including the White House, Congress, federal agencies, and state governments—have taken action in response to DeepSeek and its perceived risks to national security.

Federal Government-Wide Responses

  • Bi-Partisan Bill to Ban DeepSeek from Government Devices:  On February 7, Representatives Gottheimer (D-NJ-5) and LaHood (R-IL-16) introduced the No DeepSeek on Government Devices Act (HR 1121).  Reps. Gottheimer and LaHood, who both serve on the House Permanent Select Committee on Intelligence, each issued public statements pointing to grave and deeply held national security concerns regarding DeepSeek.  Rep. Gottheimer has stated that “we have deeply disturbing evidence that [the Chinese Communist Party (“CCP”) is] using DeepSeek to steal the sensitive data of U.S. citizens,” calling DeepSeek “a five-alarm national security fire.”  Representative LaHood stated that “[u]nder no circumstances can we allow a CCP company to obtain sensitive government or personal data.”

While the details of the bill have not yet been unveiled, any future DeepSeek prohibition could be extended by the FAR Council to all federal contractors and may not exempt commercial item contracts under FAR Part 12 or contracts below the simplified acquisition (or even the micro-purchase) threshold, similar to other bans in this sector.  Notably, such a prohibition may leave contractors with questions about the expected scope of implementation, including the particular devices that are covered.

Continue Reading U.S. Federal and State Governments Moving Quickly to Restrict Use of DeepSeek

Executive Summary

  • Artificial intelligence (AI), social media, and instant messaging regulation will be a hot topic in Brazil in 2025, with substantial activity in Congress and the Supreme Court.
  • Cloud, cybersecurity, data centers, and data privacy are topics that could also see legislative or regulatory action throughout the year at different policymaking stages.
  • Technology companies will also be affected by horizontal and sector-specific tax policy-related measures, and Brazil’s digital policy might be impacted by U.S.-Brazil relations under the new Trump administration.

Analysis

2025 is shaping up to be a key year for digital policymaking in Brazil.  It is the last year for President Luiz Inácio Lula da Silva’s administration to pursue substantial policy change before the 2026 general elections.  It is also the first year for the new congressional leadership, in particular the new Speaker of the House and President of the Senate, to put their stamp on key legislation before their own reelection campaigns next year.

Existing Legal Framework: LGT, MCI and LGPD

Brazil’s current approach to digital policy is based on three key federal statutes.  The first is the General Telecommunications Act of 1997 (“LGT”).  LGT established the rules for the country’s transition from a state-owned monopoly to a competitive, private sector-led telecommunications market.  It is the bedrock of Brazil’s digital economy infrastructure regulation as, among other things, it sets rules for the use of radio spectrum and satellite orbits.

The second key statute is the Civil Rights Framework for the Internet Act of 2014 (“MCI”).  MCI sets the principles, rights, and obligations for internet use, including the net neutrality principle and a safe harbor clause protecting internet service providers from liability for user-generated content absent a court order to remove the content.  The statute also established the first layer of data privacy provisions, as well as rules for the federal, state, and local governments’ internet-related policies and actions.

The third key federal statute is the General Personal Data Protection Act of 2018 (“LGPD”).  LGPD sets rules for the processing of personal data by individuals, companies, state-owned and state-supported enterprises, and governments.  It slightly amends MCI and adds a more robust layer of data privacy protection.

Each statute has its own regulator, respectively the National Telecommunications Agency (“ANATEL”), Brazil’s Internet Management Committee (“CGI.br”), and the National Data Protection Authority (“ANPD”).

Hot Topics in 2025: AI, Social Media, and Instant Messaging

Two agenda items will likely dominate the policy debate in Brazil in 2025.  The first is the creation of a new legal framework for AI.  After years of intense debate, the Senate approved its AI bill in December 2024.  The bill sets rights and obligations for developers, deployers, and distributors of AI systems, and takes a human rights, risk management, and transparency approach to regulating AI-related activity.  It also contains contentious provisions establishing AI-related copyright obligations.  In 2025, the House will likely debate and try to approve the bill, which is also a priority for the Lula administration.

Continue Reading Brazil’s Digital Policy in 2025: AI, Cloud, Cyber, Data Centers, and Social Media