Artificial Intelligence (AI)

On 18 March 2026, the European Parliament’s Committee on the Internal Market and Consumer Protection (“IMCO”) and the Committee on Civil Liberties, Justice and Home Affairs (“LIBE”) adopted their joint negotiating position on the European Commission’s proposed Digital Omnibus on AI (which we previously analysed here). The position will now proceed to a plenary vote, expected on 26 March 2026. The Council of the EU had previously adopted its negotiating position on 13 March 2026. This sets up trilogue negotiations between the Parliament, Council, and Commission.

Continue Reading MEPs Adopt Joint Position on Proposed Digital Omnibus on AI

As artificial intelligence (AI) technologies continue to advance and states increasingly pass legislation to regulate AI development and use, Congress and the White House are proposing comprehensive nationwide laws.

New proposals from the White House Office of Science and Technology Policy (OSTP) and Senator Marsha Blackburn (R-TN) offer comprehensive approaches.

Continue Reading White House, Blackburn Introduce Visions of Comprehensive Federal AI Policy

On February 10, 2026, federal district court Judge Jed S. Rakoff ruled from the bench in the Southern District of New York that the attorney-client privilege and the work product doctrine did not protect legal strategy materials that a criminal defendant generated using a generative AI tool, when he used a public version of the tool and was not instructed by his attorney to generate these materials.  On February 17, 2026, the court issued a written memorandum explaining its reasoning.  

The question presented – an issue of first impression – was: “whether when a user communicates with a publicly available AI platform in connection with a pending criminal investigation, are the communications protected by attorney-client privilege or the work product doctrine?”  The court’s answer was no, given the unique circumstances of the case – namely, that no lawyer was involved in the back-and-forth with the AI tool, and the tool itself was a public (i.e., non-confidential) version.

Below, we summarize the background of the case, the decision, and key takeaways on AI and legal privilege.

Continue Reading AI and Legal Privilege: Key Takeaways from US v. Heppner

On January 17, 2026, the Biodiversity Beyond National Jurisdiction (“BBNJ”) Agreement, also known as the “High Seas Treaty”, entered into force.  For the first time, companies that use marine genetic resources (“MGRs”) and digital sequence information (“DSI”) originating from areas beyond national jurisdiction may be required to share monetary and non-monetary benefits at a global level.

This marks a significant expansion of access and benefit-sharing (“ABS”) obligations for companies.  Until now, under the Convention on Biological Diversity (“CBD”) and its Nagoya Protocol, ABS obligations applied only to genetic resources originating within national jurisdictions.  The BBNJ Agreement fundamentally changes this landscape: companies in pharmaceuticals, biotechnology, cosmetics, food and feed that rely on marine-derived compounds, microorganisms or genetic data may now face new reporting and annual payment obligations.

Companies should not assume a long transition period.  Implementation is already advancing.  The European Commission has published a draft Directive (“draft EU Directive”), and the United Kingdom adopted the Biodiversity Beyond National Jurisdiction Act 2026 (“UK Act”) on February 12, 2026.  Companies should therefore assess now whether their R&D pipelines, data use practices, or product portfolios fall within scope.

In this blog, we examine how the BBNJ Agreement and its EU and UK implementation could affect companies using MGRs and DSI, and identify the key compliance risks and strategic questions for in-house counsel and senior management.

Continue Reading Navigating the new UN High Seas Treaty: Key Compliance Risks for Life Sciences Companies

In June 2025, the European Parliament (“EP”) published its draft report on “Copyright and generative artificial intelligence – opportunities and challenges” (available here). The draft report calls on the European Commission to make a series of changes to the way that copyright is protected in the age of generative AI (“GenAI”). The EP notes the challenges in finding a balance between respecting existing laws and protecting the rights of content creators on the one hand, while not hindering the development of AI technologies in the European Union on the other. In its report, the EP focuses on the perceived copyright-related risks posed at the GenAI training stage and the GenAI output stage.

Continue Reading European Parliament Proposes Changes to Copyright Protection in the Age of Generative AI

On 3 February 2026, the second International AI Safety Report (the “Report”) was published—providing a comprehensive, science-based assessment of the capabilities and risks of general-purpose AI (“GPAI”). The Report touts itself as the largest global collaboration on AI safety to date—led by Turing Award winner Yoshua Bengio, backed by an Expert Advisory Panel with nominees from more than 30 countries and international organizations, and authored by over 100 AI experts.

The Report does not make specific policy recommendations; instead, it synthesizes scientific evidence to provide an evidence base for decision-makers. This blog summarizes the Report’s key findings across its three central questions: (i) what can GPAI do today, and how might its capabilities change? (ii) what emerging risks does it pose? and (iii) what risk management approaches exist?

Continue Reading International AI Safety Report 2026 Examines AI Capabilities, Risks, and Safeguards

AI agents have arrived. Although the technology is not new, agents are rapidly becoming more sophisticated—capable of operating with greater autonomy, executing multi-step tasks, and interacting with other agents in ways that were largely theoretical just a few years ago. Organizations are already deploying agentic AI across software development, workflow automation, customer service, and e-commerce, with more ambitious applications on the horizon. As these systems grow in capability and prevalence, a pressing question has emerged: can existing legal frameworks—generally designed with human decision-makers in mind—be applied coherently to machines that operate with significant independence?

In January 2026, as part of its Tech Futures series, the UK Information Commissioner’s Office (“ICO”) published a report setting out its early thinking on the data protection implications of agentic AI. The report explicitly states that it is not intended to constitute “guidance” or “formal regulatory expectations.” Nevertheless, it provides meaningful insight into the ICO’s emerging view of agentic AI and its approach to applying data protection obligations to this context—insight that may foreshadow the regulator’s direction of travel.

The full report is lengthy and worth the read. This blog focuses on the data protection and privacy risks identified by the ICO, with the aim of helping product and legal teams anticipate potential regulatory issues early in the development process.

Continue Reading ICO Shares Early Views on Agentic AI & Data Protection

On January 20, 2026, the European Data Protection Board (“EDPB”) and the European Data Protection Supervisor (“EDPS”) (together, the “Authorities”) adopted Joint Opinion 1/2026 on the European Commission’s proposal to amend the EU AI Act (hereafter the “Proposal”, summarized in our previous blog). Overall, the Authorities acknowledge the complexity of the AI Act and agree that targeted simplifications can support legal certainty and efficient administration. However, they warn that simplification should not result in lowering the protection of fundamental rights, including data protection rights. This blog outlines some of the Authorities’ main recommendations as expressed in their Joint Opinion.

Continue Reading European Data Protection Authorities Issue Joint Opinion on the Digital Omnibus on AI

On December 16, 2025, the U.S. National Institute of Standards and Technology (“NIST”) published a preliminary draft of the Cybersecurity Framework Profile for Artificial Intelligence (“Cyber AI Profile” or “Profile”).  According to the draft, the Cyber AI Profile is intended to “provide guidelines for managing cybersecurity risk related to AI systems [and] identify[] opportunities for using AI to enhance cybersecurity capabilities.”  The draft Profile uses the existing voluntary NIST Cybersecurity Framework (“CSF”) 2.0 — which “provides guidance to industry, government agencies, and other organizations to manage cybersecurity risks” — and overlays three AI Focus Areas (Secure, Detect, Thwart) on top of the CSF’s outcomes (Functions, Categories, and Subcategories) to suggest considerations for organizations to prioritize when securing AI implementations, using AI to enhance cybersecurity defenses, or defending against adversarial uses of AI.  This draft guidance will likely be familiar to organizations that already leverage the CSF 2.0 in their cybersecurity programs and might be complementary to existing frameworks that organizations already have in place.  Even so, the outcomes are designed to be flexible such that a range of organizations (with mature or novel programs) can leverage the guidance to help manage AI-related cybersecurity risk.

Continue Reading NIST Publishes Preliminary Draft of Cybersecurity Framework Profile for Artificial Intelligence for Public Comment

On December 19, New York Governor Kathy Hochul (D) signed the Responsible AI Safety & Education (“RAISE”) Act into law, making New York the second state in the nation to codify public safety disclosure and reporting requirements for developers of frontier AI models.  Prior to signing, Governor Hochul secured several

Continue Reading New York Governor Signs Frontier AI Safety Legislation