Artificial Intelligence (AI)

Artificial intelligence (“AI”) continues to reshape the UK financial services landscape in 2026, with consumers increasingly relying on AI-driven tools for financial guidance and firms deploying more autonomous systems across their businesses.

The Financial Conduct Authority (“FCA”), Prudential Regulation Authority (“PRA”) and Bank of England (“BoE”) (together “the Regulators”) have consistently signalled that AI will be overseen through existing regulatory frameworks, rather than through bespoke AI-specific rules. At the same time, political scrutiny is intensifying, supervisory expectations are rising, and the Regulators are investing heavily in sandbox initiatives and long-term reviews to test whether those frameworks remain fit for purpose.

This article explores the latest policy signals, supervisory initiatives and regulatory tools shaping the UK’s evolving approach to AI in financial services.

Continue Reading UK Financial Services Regulators’ Approach to Artificial Intelligence in 2026

This update highlights key legislative and regulatory developments in the first quarter of 2026 related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), and Internet of Things (“IoT”).

I. Federal AI Legislative Developments

In the first quarter, members of Congress introduced several AI bills related to nonconsensual images, chatbots

Continue Reading U.S. Tech Legislative & Regulatory Update – First Quarter 2026

On March 17, Colorado Governor Jared Polis released a draft bill that would substantially overhaul the Colorado AI Act, replacing its core requirements with a narrower regime focused on disclosure, recordkeeping, and consumer notice requirements for “automated decision-making technology” (“ADMT”).  The proposal, which is still in draft form and

Continue Reading Colorado Officials Push to Repeal and Replace the Colorado AI Act

On 18 March 2026, the European Parliament’s Committee on the Internal Market and Consumer Protection (“IMCO”) and the Committee on Civil Liberties, Justice and Home Affairs (“LIBE”) adopted their joint negotiating position on the European Commission’s proposed Digital Omnibus on AI (which we previously analysed here). The position will now proceed to a plenary vote, expected on 26 March 2026. The Council of the EU had previously adopted its negotiating position on 13 March 2026. This sets up trilogue negotiations between the Parliament, Council, and Commission.

Continue Reading MEPs Adopt Joint Position on Proposed Digital Omnibus on AI

As artificial intelligence (AI) technologies continue to advance and states increasingly pass legislation to regulate AI development and use, Congress and the White House are proposing comprehensive nationwide laws.

New proposals from the White House Office of Science and Technology Policy (OSTP) and Senator Marsha Blackburn (R-TN) offer comprehensive approaches

Continue Reading White House, Blackburn Introduce Visions of Comprehensive Federal AI Policy

On February 10, 2026, federal district court Judge Jed S. Rakoff ruled from the bench in the Southern District of New York that the attorney-client privilege and the work product doctrine did not protect legal strategy materials that a criminal defendant generated using a generative AI tool, when he used a public version of the tool and was not instructed by his attorney to generate these materials.  On February 17, 2026, the court issued a written memorandum explaining its reasoning.  

The question presented – an issue of first impression – was: “whether when a user communicates with a publicly available AI platform in connection with a pending criminal investigation, are the communications protected by attorney-client privilege or the work product doctrine?”  The court’s answer was no, given the unique circumstances of the case – namely, that no lawyer was involved in the back-and-forth with the AI tool, and the tool itself was a public (i.e., non-confidential) version.

Below, we summarize the background of the case, the decision, and key takeaways on AI and legal privilege.

Continue Reading AI and Legal Privilege: Key Takeaways from US v. Heppner

On January 17, 2026, the Biodiversity Beyond National Jurisdiction (“BBNJ”) Agreement, also known as the “High Seas Treaty”, entered into force.  For the first time, companies that use marine genetic resources (“MGRs”) and digital sequence information (“DSI”) originating from areas beyond national jurisdiction may be required to share monetary and non-monetary benefits at a global level.

This marks a significant expansion of access and benefit-sharing (“ABS”) obligations for companies.  Until now, under the Convention on Biological Diversity (“CBD”) and its Nagoya Protocol, ABS obligations applied only to genetic resources originating within national jurisdictions.  The BBNJ Agreement fundamentally changes this landscape: companies in pharmaceuticals, biotechnology, cosmetics, food and feed that rely on marine-derived compounds, microorganisms or genetic data may now face new reporting and annual payment obligations.

Companies should not assume a long transition period.  Implementation is already advancing.  The European Commission has published a draft Directive (“draft EU Directive”), and the United Kingdom adopted the Biodiversity Beyond National Jurisdiction Act 2026 (“UK Act”) on February 12, 2026. Companies should therefore assess now whether their R&D pipelines, data use practices, or product portfolios fall within scope.

In this blog, we examine how the BBNJ Agreement and its EU and UK implementation could affect companies using MGRs and DSI, and identify the key compliance risks and strategic questions for in-house counsel and senior management.

Continue Reading Navigating the new UN High Seas Treaty: Key Compliance Risks for Life Sciences Companies

In June 2025, the European Parliament (“EP”) published its draft report on “Copyright and generative artificial intelligence – opportunities and challenges” (available here). The draft report calls on the European Commission to make a series of changes to the way that copyright is protected in the age of generative AI (“GenAI”). The EP notes the challenges in finding a balance between respecting existing laws and protecting the rights of content creators on the one hand, while not hindering the development of AI technologies in the European Union on the other. In its report, the EP focuses on the perceived copyright-related risks posed at the GenAI training stage and the GenAI output stage.

Continue Reading European Parliament Proposes Changes to Copyright Protection in the Age of Generative AI

On 3 February 2026, the second International AI Safety Report (the “Report”) was published—providing a comprehensive, science-based assessment of the capabilities and risks of general-purpose AI (“GPAI”). The Report touts itself as the largest global collaboration on AI safety to date—led by Turing Award winner Yoshua Bengio, backed by an Expert Advisory Panel with nominees from more than 30 countries and international organizations, and authored by over 100 AI experts.

The Report does not make specific policy recommendations; instead, it synthesizes scientific evidence to provide an evidence base for decision-makers. This blog summarizes the Report’s key findings across its three central questions: (i) what can GPAI do today, and how might its capabilities change? (ii) what emerging risks does it pose? And (iii) what risk management approaches exist?

Continue Reading International AI Safety Report 2026 Examines AI Capabilities, Risks, and Safeguards

AI agents have arrived. Although the technology is not new, agents are rapidly becoming more sophisticated—capable of operating with greater autonomy, executing multi-step tasks, and interacting with other agents in ways that were largely theoretical just a few years ago. Organizations are already deploying agentic AI across software development, workflow automation, customer service, and e-commerce, with more ambitious applications on the horizon. As these systems grow in capability and prevalence, a pressing question has emerged: can existing legal frameworks—generally designed with human decision-makers in mind—be applied coherently to machines that operate with significant independence?

In January 2026, as part of its Tech Futures series, the UK Information Commissioner’s Office (“ICO”) published a report setting out its early thinking on the data protection implications of agentic AI. The report explicitly states that it is not intended to constitute “guidance” or “formal regulatory expectations.” Nevertheless, it provides meaningful insight into the ICO’s emerging view of agentic AI and its approach to applying data protection obligations to this context—insight that may foreshadow the regulator’s direction of travel.

The full report is lengthy and worth the read. This blog focuses on the data protection and privacy risks identified by the ICO, with the aim of helping product and legal teams anticipate potential regulatory issues early in the development process.

Continue Reading ICO Shares Early Views on Agentic AI & Data Protection