
Sam Jungyun Choi

Recognized by Law.com International as a Rising Star (2023), Sam Jungyun Choi is an associate in the technology regulatory group in Brussels. She advises leading multinationals on European and UK data protection law and new regulations and policy relating to innovative technologies, such as AI, digital health, and autonomous vehicles.

Sam is an expert on the EU General Data Protection Regulation (GDPR) and the UK Data Protection Act, having advised on these laws since they first came into effect. In recent years, her work has expanded to advising companies on new data and digital laws in the EU, including the AI Act, the Data Act, and the Digital Services Act.

Sam's practice includes advising leading companies in the technology, life sciences and gaming sectors on regulatory, compliance and policy issues arising under laws relating to privacy and data protection, digital services and AI. She advises clients on the design of new products and services, the preparation of privacy documentation, and the development of data and AI governance programs. She also advises clients on matters relating to children's privacy and policy initiatives relating to online safety.

On May 8, 2026, the European Commission (“Commission”) published draft guidelines (“Guidelines”) on the implementation of the transparency obligations under Article 50 of the EU Artificial Intelligence Act (“AI Act”), opening a targeted consultation that runs until June 3, 2026.

The Guidelines are non-binding, but they are the first Commission instrument to provide interpretive guidance across the full scope of Article 50. They were prepared in parallel with the related, but more narrowly scoped, Code of Practice on Transparency of AI-Generated Content (“Code of Practice” or “Code”), the second draft of which was published on March 5, 2026.

Continue Reading 10 Takeaways: European Commission Draft Guidelines on AI Transparency under the EU AI Act

International regulators are finalizing the first global safety standards for Automated Driving Systems (“ADS”). In January, the UN Working Party on Automated/Autonomous and Connected Vehicles (“GRVA”) approved a draft UN Regulation (“UNR”) under the 1958 Agreement and a draft Global Technical Regulation (“GTR”) under the 1998 Agreement, submitting both for adoption by the UN World Forum for Harmonization of Vehicle Regulations.

Developed in parallel to ensure harmonized technical requirements across jurisdictions, the UNR and GTR are expected to be adopted at the 199th WP.29 session in June 2026. In the meantime, work continues on finalizing the accompanying Guidance and Interpretation Document. This post provides an overview of the UN regulatory framework, the legislative status of the ADS instruments as of May 2026, an outline of the key provisions, and implications for companies across the ADS value chain.

Continue Reading UN Regulation and GTR on Automated Driving Systems: Current State of Play

EU Member States are currently designing two possible new Important Projects of Common European Interest (“IPCEIs”) to support the development of AI and compute infrastructure in the EU (together the “Digital IPCEIs”), subject to European Commission (“Commission”) approval.

On 10 March 2026, the “matchmaking” phase under the IPCEI on Artificial Intelligence (“IPCEI-AI”) was officially launched in Berlin. It brings together companies whose AI projects have been pre-selected through national calls for expressions of interest (“CEIs”) in each participating Member State. Its objective is to form European consortia eligible for State co-funding under the IPCEI-AI. National CEIs in 17 participating Member States are now closed; Finland and Lithuania are still expected to launch their CEIs.

A second digital IPCEI on Compute Infrastructure Continuum (“IPCEI-CIC”) was launched in late 2025 by 15 Member States. Most of these participating Member States – including Belgium, Croatia, Estonia, Finland, France, Germany, Hungary, Lithuania, the Netherlands, Romania, Slovakia, and Spain – have not yet launched their CEIs.

Continue Reading Important Projects of Common European Interest (IPCEIs) on Artificial Intelligence and Compute Infrastructure Continuum

As agentic AI systems move from research labs to enterprise workflows, regulators worldwide are grappling with how to address the potential risks these systems may pose (as discussed in prior blog posts here and here).  In January 2026, Singapore’s Infocomm Media Development Authority (“IMDA”) launched a non-binding Model AI Governance Framework for Agentic AI (“Framework”), just a few months after the Cyber Security Agency released a discussion paper titled “Securing Agentic AI” (“Discussion Paper”).

Together, these documents provide organizations with a structured, operational roadmap to consider when navigating some of the potential security and governance challenges posed by agentic AI.  This blog post highlights some of their key points.

Continue Reading Singapore Issues Governance and Security Guidance for Agentic AI

Artificial intelligence (“AI”) continues to reshape the UK financial services landscape in 2026, with consumers increasingly relying on AI-driven tools for financial guidance and firms deploying more autonomous systems across their businesses.

The Financial Conduct Authority (“FCA”), Prudential Regulation Authority (“PRA”) and Bank of England (“BoE”) (together “the Regulators”) have consistently signalled that AI will be overseen through existing regulatory frameworks, rather than through bespoke AI-specific rules. At the same time, political scrutiny is intensifying, supervisory expectations are rising, and the Regulators are investing heavily in sandbox initiatives and long-term reviews to test whether those frameworks remain fit for purpose.

This article explores the latest policy signals, supervisory initiatives and regulatory tools shaping the UK’s evolving approach to AI in financial services.

Continue Reading UK Financial Services Regulators’ Approach to Artificial Intelligence in 2026

Ofcom and the Information Commissioner’s Office (“ICO”) have published a joint statement on age assurance (“Joint Statement”). The Joint Statement is aimed at services likely to be accessed by children that fall within the scope of the Online Safety Act 2023 (“OSA”) and UK data protection legislation, and is designed to help providers comply with both their online safety and data protection obligations when deploying age assurance.

The Joint Statement arrives alongside a broader push from both regulators—including Ofcom’s recent call to action directed at major tech firms, an open letter from the ICO urging platforms to strengthen their age checks, and several enforcement actions by both regulators.

Continue Reading Ofcom and ICO Issue Joint Statement on Age Assurance

On March 2, 2026, the UK Department for Science, Innovation and Technology (“DSIT”) launched its consultation, titled “Growing up in the online world: a national conversation”. The consultation is open until May 26, 2026, after which the government will publish a summary of responses and its proposed approach. DSIT has indicated that it intends to move quickly on the consultation’s findings, drawing on newly granted powers that allow for accelerated implementation of online safety measures.

The consultation seeks views on a wide range of potential measures to strengthen children’s safety and wellbeing online, including more robust age‑assurance mechanisms, a statutory minimum age for social media, raising the UK’s age of digital consent, restrictions on certain features (such as livestreaming and disappearing messages), and new obligations for AI chatbots and generative‑AI services.

DSIT’s proposals could significantly expand regulatory expectations beyond the Online Safety Act 2023 (“OSA”)—including potential age‑based access limits (including differing safeguards as between teens and younger children), feature‑level restrictions, and enhanced duties for AI‑enabled services. Early engagement will be important to ensure that the government takes account of the views of affected service providers and understands the operational and technical implications of the measures proposed.

Continue Reading UK Government Launches Consultation on Children’s Online Experiences, Including New Obligations for AI

On 4 March 2026, the European Commission (the “Commission”) published its proposal for a regulation establishing a framework for the acceleration of the EU’s industrial capacity and decarbonisation in strategic sectors (“Proposed Industrial Accelerator Act”, or “Proposed IAA”), accompanied by four annexes. The initiative is intended to strengthen the EU’s industrial base while accelerating decarbonisation in key manufacturing sectors considered strategically important (i.e., energy-intensive industries, net-zero technology manufacturing, and the automotive manufacturing ecosystem). These sectors currently represent less than 15% of EU GDP, and the Commission’s objective is to increase this share to 20% by 2035. The Proposed IAA was delayed three times before publication and underwent significant rewriting, reflecting both internal debates within the Commission and diverging reactions from Member States. It also reflects the challenges posed by the broader geopolitical context, as the Commission aims to address economic security concerns through industrial policies whilst navigating international trade relationships and commitments.

The Proposed IAA introduces a regulatory framework combining three policy tools. First, it establishes demand-side measures designed to create “lead markets” for low-carbon and “Made in EU” industrial products through public procurement and certain public support schemes. Second, it introduces conditions for allowing certain foreign direct and indirect investments (“FDI”) in strategic sectors, aimed at maximising the industrial benefits of such investments within the EU. Third, it includes measures to streamline permitting procedures and facilitate industrial clustering, with the objective of accelerating the deployment of manufacturing projects.

This blog summarises the key aspects of each tool and their potential implications for companies active in the covered industries or looking to invest in the covered industries.

Continue Reading European Commission Publishes the Proposed Industrial Accelerator Act

On February 19, 2026, the UK Court of Appeal handed down its decision in DSG Retail Limited v The Information Commissioner [2026] EWCA Civ 140. The Court ruled that a controller’s data security duty applies to all personal data for which it acts as controller – irrespective of whether the information would constitute personal data in the hands of a third party (in this case, an attacker). Note that the case concerns events before the GDPR came into force, so the legal context is provided by the UK Data Protection Act 1998 (“DPA 1998”), although the Court did take into account more recent jurisprudence, including CJEU case law.

The case adds useful colour to ongoing debates surrounding the definition of “personal data.” The Court of Appeal confirmed that a controller’s duty to implement appropriate measures to protect personal data applies to data that is “personal” from the perspective of the controller – even if a third-party attacker could not identify individuals from the exfiltrated dataset. This dovetails with the clarification in SRB v EDPS that whether data is “personal” can depend on the context, while a controller’s obligations (such as transparency) must be assessed from the controller’s perspective at the relevant time (which, for the transparency principle, is the time of collection of the data). (For more information on SRB v EDPS, see our prior post here.)

Continue Reading UK Court of Appeal Rules on the Concept of Personal Data in the Context of Data Security

On February 18, 2026, the European Data Protection Board (“EDPB”) published its Report on Stakeholder Event on Anonymisation and Pseudonymisation of 12 December 2025 (the “Report”). The Report summarises feedback from a remote stakeholder event convened to inform the EDPB’s ongoing work on Guidelines 01/2025 on Pseudonymisation (version for public consultation available here) and forthcoming guidance on anonymisation. The event gathered input from 115 participants spanning industry, NGOs, academia, law firms, and public sector bodies.

The objective of the Report is to capture stakeholder insights on how the General Data Protection Regulation (“GDPR”) applies to anonymisation and pseudonymisation, particularly following the Court of Justice of the European Union’s (“CJEU”) judgment in EDPS v SRB (C‑413/23 P). (See our previous blog post here.)

Continue Reading EDPB Publishes Report on Stakeholder Event on Anonymisation and Pseudonymisation