Artificial Intelligence (AI)

In September, FTC Chairman Andrew Ferguson called for the FTC to regulate artificial intelligence claims through its existing consumer protection authorities:  “Imposing comprehensive regulations at the incipiency of a potential technological revolution would be foolish.  For now, we should limit ourselves to enforcing existing laws against illegal conduct when it

Continue Reading FTC Challenges Deceptive Artificial Intelligence Claims

AI chatbots are transforming how businesses handle consumer inquiries and complaints, offering speed and availability that traditional channels often cannot match.  However, the European Commission’s recent Digital Fairness Fitness Check has spotlighted a gap: EU consumers currently lack a cross-sectoral right to demand human contact when interacting with AI chatbots in business-to-consumer settings.  It remains unclear whether and how the European Commission will address this gap.  The forthcoming Digital Fairness Act could do so, but the Commission’s proposal is not expected to be published until the third quarter of 2026.  This post highlights key consumer protection considerations for companies deploying AI chatbots in the EU market.

AI Chatbots Cannot Be the Only Contact Channel

Under EU law, particularly the Consumer Rights Directive (“CRD”) and the eCommerce Directive, consumers must have access to traditional communication channels such as the trader’s postal address, telephone number, and email address.  The Court of Justice of the EU has made clear that consumers must be able to contact traders directly, quickly, and effectively (Case C-649/17).  While chatbots can assist, they cannot replace mandatory human contact options.

AI Chatbots as Supplementary Communication Channels

The CRD requires traders to disclose their primary contact details before concluding a contract, but it does not prohibit offering AI chatbots as additional communication tools.  Where a chatbot enables consumers to retain a durable record of their interactions, including timestamps, traders should inform consumers of that capability.  A durable record is information stored on a medium that remains accessible and unalterable for future reference, such as an email or a downloadable file.

In any event, certain communications, such as the acknowledgment of a consumer’s right of withdrawal, must be provided in a “durable medium,” ensuring consumers have a stable and accessible record of important contractual information.
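For illustration, the following minimal Python sketch shows one way a chatbot backend might let a consumer download a timestamped transcript that could serve as such a durable record.  The message structure, field names, and JSON format are illustrative assumptions, not requirements drawn from the CRD or the eCommerce Directive.

```python
# Minimal sketch: exporting a chatbot session as a timestamped, downloadable record.
# The data model below (sender/text/timestamp fields, JSON output) is an assumption
# for illustration only, not a format prescribed by EU consumer law.
from datetime import datetime, timezone
import json


def export_transcript(messages: list[dict]) -> bytes:
    """Serialize a chat session to a downloadable JSON file with timestamps."""
    record = {
        "exported_at": datetime.now(timezone.utc).isoformat(),
        "messages": [
            {
                "sender": m["sender"],
                "text": m["text"],
                "timestamp": m["timestamp"],  # recorded when each message was sent
            }
            for m in messages
        ],
    }
    return json.dumps(record, indent=2).encode("utf-8")


# Example: a consumer requests a copy of the conversation at the end of a session.
session = [
    {"sender": "consumer", "text": "I want to cancel my order.",
     "timestamp": "2025-05-01T10:00:00Z"},
    {"sender": "chatbot", "text": "Your cancellation request has been logged.",
     "timestamp": "2025-05-01T10:00:05Z"},
]
with open("transcript.json", "wb") as f:
    f.write(export_transcript(session))
```

Offering such a download does not, by itself, satisfy the “durable medium” requirements discussed above; it simply illustrates the kind of record-retention feature a trader should disclose to consumers.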

Human Oversight and the Right to Human Intervention

Continue Reading Digital Fairness Act Series: Topic 2 – Transparency and Disclosure Obligations for AI Chatbots in Consumer Interactions

On May 14, 2025, Covington convened experts across our practice groups for the Fourth Annual Covington Robotics Forum to explore the legal and regulatory risks and opportunities impacting robotics, AI, and connected devices. Eight Covington attorneys discussed global forecasts relevant to these spaces in a highly concentrated 90-minute session, culminating in an Industry Spotlight moderated by Covington partner Nick Evoy featuring Casey Campbell, Deputy General Counsel and Chief Intellectual Property Counsel at Figure AI. Highlights from the Forum are captured below.

AI & Robotics in the Workplace

Covington attorneys Carolyn Rashby and Anna Oberschelp de Meneses addressed key considerations for companies implementing AI tools. In the U.S., although no federal laws specifically address robotics or the use of AI in employment, employers must still comply with preexisting federal laws such as Title VII and the FCRA. Meanwhile, various states and localities are enacting legislation specifically aimed at these technologies, such as New York City’s Local Law 144, which regulates employers’ use of automated employment decision tools. Similarly, a patchwork of rules exists in the EU, requiring companies to monitor both EU-level regulations and directives and member state-specific laws. Recommended best practices for employers seeking to use AI tools and robotics in the workplace include vetting AI vendors and tools for potential bias and mitigating any bias identified, maintaining human oversight, and instituting ongoing training and compliance measures.

Product Safety, Product Liability & Risks

Covington attorneys Joshua González and Daniel Auten addressed key considerations for product safety and product liability in robotics. They identified robotics and AI as among the most actively transforming areas of product liability law today, highlighting a recent case that found both the manufacturer of a robotics device and the software developer could be subject to product liability claims. Key defenses in robotics-related product liability suits may include asserting federal or state preemption, arguing lack of proximate causation, and, importantly, relying on pre-planned contractual defenses and indemnification provisions. On the regulatory side, the CPSC and NHTSA have hosted a number of information-gathering meetings on robotics and will likely continue to issue relevant reports and monitor industry standards. Recommendations for companies in this space include developing strategies for eventual regulatory engagement, monitoring enforcement activities, and staying abreast of regulatory obligations, such as reporting requirements.

Continue Reading Covington Robotics Forum Spotlight – Enhanced Autonomy: Strategies to Navigate New Regulations, Risks & Opportunities

This is the third blog in a series of Covington blogs on cybersecurity policies, executive orders (“EOs”), and other actions of the new Trump Administration.  This blog describes key cybersecurity developments that took place in April 2025. 

NIST Publishes Initial Draft of Guidance for High Performance Computing Systems

U.S. National

Continue Reading April 2025 Cybersecurity Developments Under the Trump Administration

On May 7, 2025, the European Commission published a Q&A on the AI literacy obligation under Article 4 of the AI Act (the “Q&A”).  The Q&A builds upon the Commission’s guidance on AI literacy provided in its webinar in February 2025, covered in our earlier blog here.  Among other

Continue Reading European Commission Publishes Q&A on AI Literacy

House Republicans have passed through committee a nationwide, 10-year moratorium on the enforcement of state and local laws and regulations that impose requirements on AI and automated decision systems.  The moratorium, which would not apply to laws that promote AI adoption, highlights the widening gap between a wave of new

Continue Reading House Republicans Push for 10-Year Moratorium on State AI Laws

This is part of an ongoing series of Covington blogs on the AI policies, executive orders, and other actions of the Trump Administration.  This blog describes AI actions taken by the Trump Administration in April 2025, and prior articles in this series are available here.

White House OMB Issues AI Use & Procurement Requirements for Federal Agencies

On April 3, the White House Office of Management & Budget (“OMB”) issued two memoranda on the use and procurement of AI by federal agencies: Memorandum M-25-21 on Accelerating Federal Use of AI through Innovation, Governance, and Public Trust (“OMB AI Use Memo”) and Memorandum M-25-22 on Driving Efficient Acquisition of Artificial Intelligence in Government (“OMB AI Procurement Memo”).  The two memos partially implement President Trump’s January 23 Executive Order 14179 on “Removing Barriers to American Leadership in Artificial Intelligence,” which, among other things, directs OMB to revise the Biden OMB AI Memos to align with the AI EO’s policy of “sustain[ing] and enhanc[ing] America’s global AI dominance.”  The OMB AI Use Memo outlines agency governance and risk management requirements for the use of AI, including AI use case inventories and generative AI policies, and establishes “minimum risk management practices” for “high-impact AI use cases.”  The OMB AI Procurement Memo establishes requirements for agency AI procurement, including preferences for AI “developed and produced in the United States” and contract terms to protect government data and prevent vendor lock-in.  According to the White House’s fact sheet, the OMB Memos, which rescind and replace the AI use and procurement memos issued under President Biden’s Executive Order 14110, shift U.S. AI policy to a “forward-leaning, pro-innovation, and pro-competition mindset” that will make agencies “more agile, cost-effective, and efficient.”

Department of Energy Announces Federal Sites for AI Data Center Construction

On April 3, the Department of Energy (“DOE”) issued a Request for Information (“RFI”) on AI Infrastructure on federal lands owned or managed by DOE.  The RFI seeks comment from “entities with experience in the development, operation, and management of AI infrastructure,” along with other stakeholders, on a range of topics, including potential data center designs, technologies, and operational models; potential power needs and timelines for data centers; and related financial or contractual considerations.  As part of the RFI, DOE announced 16 potential DOE sites for “rapid [AI] data center construction,” with the goal of initiating data center construction by the end of 2025 and commencing data center operation by the end of 2027 through public-private partnerships.  The comment period for the RFI closed on May 7, 2025.

President Trump Issues Executive Order on Coal-Powered AI Infrastructure

On April 8, President Trump issued Executive Order 14261, titled “Reinvigorating America’s Beautiful Clean Coal Industry,” directing the Departments of Agriculture, Energy, and the Interior to identify coal resources and reserves on Federal lands for mining by public or private actors, prioritize and expedite leases for coal mining on Federal lands, and rescind regulations that discourage investments in coal production, among other things.  The Executive Order also directs the Departments of Commerce, Energy, and the Interior to identify regions with suitable coal-powered infrastructure for AI data centers, assess the potential for expanding coal-powered infrastructure to meet AI data center electricity needs, and submit a report of findings and proposals to the White House National Energy Dominance Council, Assistant to the President for Science & Technology, and Special Advisor for AI and Crypto by June 7, 2025.

House CCP Committee Releases Report on DeepSeek Concerns

On April 16, the House Select Committee on the Chinese Communist Party released its report on DeepSeek and its AI platform, titled DeepSeek Unmasked: Exposing the CCP’s Latest Tool for Spying, Stealing, and Subverting U.S. Export Control Restrictions.  Stating that DeepSeek “represents a profound threat to our nation’s security,” the report found that DeepSeek sends U.S. data to the Chinese government and manipulates chatbot outputs to “align with the CCP’s ideological and political objectives.”  The report also found that it was “highly likely” that DeepSeek used model distillation techniques to extract reasoning outputs and copy leading U.S. AI model capabilities in order to expedite development.  The report further found that DeepSeek violated U.S. semiconductor export controls.  The report called on the U.S. to expand export controls and improve enforcement, in addition to preparing for “strategic surprise” arising from rapid advancements in Chinese AI.  Ultimately, the report may help to accelerate possible U.S. Government bans on DeepSeek along the lines of the Kansas ban discussed below.

Continue Reading April 2025 AI Developments Under the Trump Administration

As artificial intelligence (AI) tools become increasingly integrated into hiring and other workplace decisions, businesses must navigate a rapidly evolving legal landscape regulating the use of AI. To stay compliant and build trust within the workforce, employers can consider the following best practices for responsible AI deployment in employment contexts.

Continue Reading AI in the Workplace: Best Practices for U.S. Employers

This quarterly update highlights key legislative, regulatory, and litigation developments in the first quarter of 2025 related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), and cryptocurrencies and blockchain. 

I. Artificial Intelligence

A.  Federal Legislative Developments

In the first quarter, members of Congress introduced several AI bills addressing

Continue Reading U.S. Tech Legislative & Regulatory Update – First Quarter 2025

This is part of an ongoing series of Covington blogs on the AI policies, executive orders, and other actions of the Trump Administration.  This blog describes AI actions taken by the Trump Administration in March 2025, and prior articles in this series are available here.

White House Receives Public Comments on AI Action Plan

On March 15, the White House Office of Science & Technology Policy and the Networking and Information Technology Research and Development National Coordination Office within the National Science Foundation closed the comment period for public input on the White House’s AI Action Plan, following their issuance of a Request for Information (“RFI”) on the AI Action Plan on February 6.  As required by President Trump’s AI EO, the RFI called on stakeholders to identify the highest-priority policy actions for the new AI Action Plan, organized around 20 broad and non-exclusive topics, including data centers, data privacy and security, technical and safety standards, intellectual property, and procurement, to achieve the AI EO’s policy of “sustain[ing] and enhanc[ing] America’s global AI dominance.”

The RFI resulted in 8,755 submitted comments, including submissions from nonprofit organizations, think tanks, trade associations, industry groups, academia, and AI companies.  The final AI Action Plan is expected by July of 2025.

NIST Launches New AI Standards Initiatives

The National Institute of Standards & Technology (“NIST”) announced several AI initiatives in March to advance AI research and the development of AI standards.  On March 19, NIST launched its GenAI Image Challenge, an initiative to evaluate generative AI “image generators” and “image discriminators” (i.e., AI models designed to detect whether images are AI-generated).  NIST called on academia and industry research labs to participate in the challenge by submitting generators and discriminators to NIST’s GenAI platform.

On March 24, NIST released its final report on Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations, NIST AI 100-2e2025, with voluntary guidance for securing AI systems against adversarial manipulations and attacks.  Noting that adversarial attacks on AI systems “have been demonstrated under real-world conditions, and their sophistication and impacts have been increasing steadily,” the report provides a taxonomy of AI system attacks on predictive and generative AI systems at various stages of the “machine learning lifecycle.” 
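For readers less familiar with the kinds of attacks the taxonomy covers, the sketch below illustrates one classic evasion technique, the Fast Gradient Sign Method (FGSM), against a toy classifier.  The model, inputs, and attack budget are illustrative assumptions and are not drawn from the NIST report.

```python
# Minimal sketch of an evasion attack (FGSM) on a toy classifier, assuming PyTorch.
# The tiny model, random input, and epsilon value are illustrative only.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy "image" classifier: 8x8 grayscale inputs, 2 classes.
model = nn.Sequential(nn.Flatten(), nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

x = torch.rand(1, 1, 8, 8)   # benign input
y = torch.tensor([1])        # its assumed true label
loss_fn = nn.CrossEntropyLoss()

# FGSM: nudge each input pixel in the direction that increases the model's loss.
x_adv = x.clone().requires_grad_(True)
loss = loss_fn(model(x_adv), y)
loss.backward()
epsilon = 0.25               # attack budget (illustrative)
x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

# The perturbed input may now be misclassified even though it changed only slightly.
print("clean prediction:", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

Mitigations such as adversarial training aim to make models more robust against exactly this kind of gradient-guided perturbation.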

On March 25, NIST announced the launch of an “AI Standards Zero Drafts project” that will pilot a new process for creating AI standards.  The new standards process will involve the creation of preliminary “zero drafts” of AI standards drafted by NIST and informed by rounds of stakeholder input, which will be submitted to standards developing organizations (“SDOs”) for formal standardization.  NIST outlined four AI topics for the pilot of the Zero Drafts project: (1) AI transparency and documentation about AI systems and data; (2) methods and metrics for AI testing, evaluation, verification, and validation (“TEVV”); (3) concepts and terminology for AI system designs, architectures, processes, and actors; and (4) technical measures for reducing synthetic content risks.  NIST called for stakeholder input on the topics, scope, and priorities of the Zero Drafts process, with no set deadline for submitting responses.

Continue Reading March 2025 AI Developments Under the Trump Administration