Artificial Intelligence (AI)

On September 16, 2025, the European Commission launched a call for evidence to collect feedback and best practices on simplifying several key areas of the EU digital rulebook, ahead of its planned Digital Omnibus package. This initiative targets legislation related to data, cybersecurity, and artificial intelligence, aiming to reduce administrative burdens and compliance costs for businesses while preserving high standards of fairness, security, and privacy online.

The Food and Drug Administration (FDA) has announced that its Digital Health Advisory Committee (DHAC) will meet on November 6, 2025, to discuss and make recommendations on the topic of genAI-enabled digital mental health medical devices.  The DHAC will discuss potential “benefits, risks to health, and risk mitigations” for such devices, “including premarket evidence and postmarket monitoring considerations.”

On September 10, Senate Commerce, Science, and Transportation Committee Chair Ted Cruz (R-TX) released what he called a “light-touch” regulatory framework for federal AI legislation, outlining five pillars for advancing American AI leadership.  In parallel, Senator Cruz introduced the Strengthening AI Normalization and Diffusion by Oversight and eXperimentation (“SANDBOX”) Act (S. 2750), which would establish a federal AI regulatory sandbox program that would waive or modify federal agency regulations and guidance for AI developers and deployers.  Collectively, the AI framework and the SANDBOX Act mark the first congressional effort to implement the recommendations of the AI Action Plan the Trump Administration released on July 23.

Brazil’s National Institute of Intellectual Property (“INPI”) initiated a public consultation on new guidance for the review of patent applications related to artificial intelligence (“AI”). The draft guidance document consolidates three previous INPI regulations and best practices adopted by other patent offices.


As the California Legislature’s 2025 session draws to a close, lawmakers have advanced over a dozen AI bills to the final stages of the legislative process, setting the stage for a potential showdown with Governor Gavin Newsom (D).  Several of the bills have already passed both chambers.

On July 24, 2025, the European Parliament (EP) published a study entitled Artificial Intelligence and Civil Liability – A European Perspective. The study considers some of the EU’s existing and proposed liability frameworks, notably the revised Product Liability Directive (PLDr) and the AI Liability Directive (AILD), which was proposed by the European Commission only to be later withdrawn. The study concludes that neither instrument sufficiently addresses the full scope of product liability risks and defects uniquely posed by high-risk AI systems, as that concept is defined by the EU AI Act. Therefore, it calls for the creation of a dedicated strict liability framework, specifically designed to tackle the particular liability risks that these systems are said to give rise to. While it is too early to predict whether other key European stakeholders will support such a framework and bring it to fruition, this development is an important one to monitor closely for those creating or working with high-risk AI systems.

This update highlights key mid-year legislative and regulatory developments and builds on our first quarter update related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), Internet of Things (“IoT”), and cryptocurrencies and blockchain developments.

I. Federal AI Legislative Developments

In the first session of the 119th Congress, lawmakers rejected a proposed moratorium on state and local enforcement of AI laws and advanced several AI legislative proposals focused on deepfake-related harms.  Specifically, on July 1, after weeks of negotiations, the Senate voted 99-1 to strike a proposed 10-year moratorium on state and local enforcement of AI laws from the budget reconciliation package, the One Big Beautiful Bill Act (H.R. 1), which President Trump signed into law.  The vote to strike the moratorium follows the collapse of an agreement on revised language that would have shortened the moratorium to 5 years and allowed states to enforce “generally applicable laws,” including child online safety, digital replica, and CSAM laws, that do not have an “undue or disproportionate effect” on AI.  Congress could technically still consider the moratorium during this session, but the chances of that happening are low based on both the political atmosphere and the lack of a must-pass legislative vehicle in which it could be included.  See our blog post on this topic for more information.

Additionally, lawmakers continue to focus legislation on deepfakes and intimate imagery.  For example, on May 19, President Trump signed the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks (“TAKE IT DOWN”) Act (H.R. 633 / S. 146) into law, which requires online platforms to establish a notice and takedown process for nonconsensual intimate visual depictions, including certain depictions created using AI.  See our blog post on this topic for more information.  Meanwhile, members of Congress continued to pursue additional legislation to address deepfake-related harms, such as the STOP CSAM Act of 2025 (S. 1829 / H.R. 3921) and the Disrupt Explicit Forged Images And Non-Consensual Edits (“DEFIANCE”) Act (H.R. 3562 / S. 1837).

On July 29, 2025, the National Institute of Standards and Technology (“NIST”) unveiled an outline for preliminary, stakeholder-driven standards, known as a “zero draft,” for AI testing, evaluation, verification, and validation (“TEVV”).  This outline is part of NIST’s AI Standards Zero Drafts pilot project, which was announced on March 25, 2025, as we previously reported.  The goal is to create a flexible, high-level framework for companies to design their own AI testing and validation procedures.  Of note, NIST is not prescribing exact methods for testing and validation.  Instead, it offers a structure around key terms, lifecycle stages, and guiding principles that align with future international standards.  NIST has asked for stakeholder input on the topics, scope, and priorities of the Zero Drafts process, and feedback is open until September 12, 2025.

The NIST outline breaks AI TEVV into several foundational elements, and the list it provides is non-exhaustive.