Photo of Sam Jungyun Choi

Sam Jungyun Choi

Recognized by Law.com International as a Rising Star (2023), Sam Jungyun Choi is an associate in the technology regulatory group in Brussels. She advises leading multinationals on European and UK data protection law and new regulations and policy relating to innovative technologies, such as AI, digital health, and autonomous vehicles.

Sam is an expert on the EU General Data Protection Regulation (GDPR) and the UK Data Protection Act, having advised on these laws since they started to apply. In recent years, her work has evolved to include advising companies on new data and digital laws in the EU, including the AI Act, Data Act and the Digital Services Act.

Sam's practice includes advising leading companies in the technology, life sciences and gaming sectors on regulatory, compliance and policy issues arising under laws relating to privacy and data protection, digital services and AI. She advises clients on the design of new products and services, preparing privacy documentation, and developing data and AI governance programs. She also advises clients on matters relating to children’s privacy and policy initiatives relating to online safety.

On March 2, 2026, the UK Department for Science, Innovation and Technology (“DSIT”) launched its consultation, titled “Growing up in the online world: a national conversation”. The consultation is open until May 26, 2026, after which the government will publish a summary of responses and its proposed approach. DSIT has indicated that it intends to move quickly on the consultation’s findings, drawing on newly granted powers that allow for accelerated implementation of online safety measures.

The consultation seeks views on a wide range of potential measures to strengthen children’s safety and wellbeing online, including more robust age‑assurance mechanisms, a statutory minimum age for social media, raising the UK’s age of digital consent, restrictions on certain features (such as livestreaming and disappearing messages), and new obligations for AI chatbots and generative‑AI services.

DSIT’s proposals could significantly expand regulatory expectations beyond the Online Safety Act 2023 (“OSA”)—including potential age‑based access limits (including differing safeguards as between teens and younger children), feature‑level restrictions, and enhanced duties for AI‑enabled services. Early engagement will be important to ensure that the government takes account of the views of affected service providers and understands the operational and technical implications of the measures proposed.

Continue Reading UK Government Launches Consultation on Children’s Online Experiences, Including New Obligations for AI

On 4 March 2026, the European Commission (the “Commission”) published its proposal for a regulation establishing a framework for the acceleration of the EU’s industrial capacity and decarbonisation in strategic sectors (“Proposed Industrial Accelerator Act”, or “Proposed IAA”), accompanied by four annexes. The initiative is intended to strengthen the EU’s industrial base while accelerating decarbonisation in key manufacturing sectors considered strategically important (i.e., energy-intensive industries, net-zero technology manufacturing, and the automotive manufacturing ecosystem). These sectors currently represent less than 15% of EU GDP, and the Commission’s objective is to increase this share to 20% by 2035. The Proposed IAA was delayed three times before publication and underwent significant rewriting, which reflects both internal debates within the Commission and diverging reactions from Member States.  It also reflects the challenges posed by the broader geopolitical context, as the Commission aims to address economic security concerns through industrial policies whilst navigating international trade relationships and commitments.

The Proposed IAA introduces a regulatory framework combining three policy tools. First, it establishes demand-side measures designed to create “lead markets” for low-carbon and “Made in EU” industrial products through public procurement and certain public support schemes. Second, it introduces conditions for allowing certain foreign direct and indirect investments (“FDI”) in strategic sectors, aimed at maximising the industrial benefits of such investments within the EU. Third, it includes measures to streamline permitting procedures and facilitate industrial clustering, with the objective of accelerating the deployment of manufacturing projects.

This blog summarises the key aspects of each tool and their potential implications for companies active in, or looking to invest in, the covered industries.

Continue Reading European Commission Publishes the Proposed Industrial Accelerator Act

On February 19, 2026, the UK Court of Appeal handed down its decision in DSG Retail Limited v The Information Commissioner [2026] EWCA Civ 140. The Court ruled that a controller’s data security duty applies to all personal data for which it acts as controller – irrespective of whether the information would constitute personal data in the hands of a third party (in this case, an attacker). Note that the case concerns events before the GDPR came into force, so the legal context is provided by the UK Data Protection Act 1998 (“DPA 1998”), although the Court did take into account more recent jurisprudence, including CJEU case law.

The case adds useful colour to ongoing debates surrounding the definition of “personal data.” The Court of Appeal confirmed that a controller’s duty to implement appropriate measures to protect personal data applies to data that is “personal” from the perspective of the controller—even if a third-party attacker could not identify individuals from the exfiltrated dataset. This dovetails with the CJEU’s clarification in EDPS v SRB that whether data is “personal” can depend on the context, while a controller’s obligations (such as transparency) must be assessed from the controller’s perspective at the relevant time (which, for the transparency principle, is at the time of collection of the data). (For more information on EDPS v SRB, see our prior post here.)

Continue Reading UK Court of Appeal Rules on the Concept of Personal Data in the Context of Data Security

On February 18, 2026, the European Data Protection Board (“EDPB”) published its Report on Stakeholder Event on Anonymisation and Pseudonymisation of 12 December 2025 (the “Report”). The Report summarises feedback from a remote stakeholder event convened to inform the EDPB’s ongoing work on Guidelines 01/2025 on Pseudonymisation (version for public consultation available here) and forthcoming guidance on anonymisation. The event gathered input from 115 participants spanning industry, NGOs, academia, law firms, and public sector bodies.

The objective of the Report is to capture stakeholder insights on how the General Data Protection Regulation (“GDPR”) applies to anonymisation and pseudonymisation, particularly following the Court of Justice of the European Union’s (“CJEU”) judgment in EDPS v SRB (C‑413/23 P). (See our previous blog post here.)

Continue Reading EDPB Publishes Report on Stakeholder Event on Anonymisation and Pseudonymisation

On 16 December 2025, the European Commission presented the Automotive Package (the “Package”), a set of interlinked legislative and policy initiatives aimed at supporting the European automotive sector’s transition to clean mobility. The Package has four core components: (i) a proposal to revise the CO₂ emission performance standards for cars and vans, (ii) the so-called “Battery Booster Strategy”, (iii) a proposal on greening corporate vehicle fleets, and (iv) a proposal for an “Automotive Omnibus” regulation that would amend several pieces of automotive legislation to simplify regulations for vehicle manufacturers. Together, these initiatives signal a material recalibration of the EU’s approach to vehicle decarbonization.

Continue Reading The EU Automotive Package: Increased Compliance Flexibility, but Growing “Made in the EU” Conditionality

On 3 December 2025, the European Commission adopted the RESourceEU Action Plan, signaling that Europe’s industrial competitiveness will increasingly depend on its ability to secure and diversify critical raw material (“CRM”) supply chains.  For companies, inside and outside the EU, RESourceEU is more than a technical update: it marks a policy shift toward a more interventionist and security-driven approach to CRM governance.

The analysis below outlines the drivers behind the initiative, its main components, and the implications for multinationals trading into the EU.

Continue Reading RESourceEU Action Plan – Strengthening the EU’s Access to Critical Raw Materials

On June 3, 2025, the OECD introduced a new framework called AI Capability Indicators that compares AI capabilities to human abilities. The framework is intended to help policymakers assess the progress of AI systems and enable informed policy responses to new AI advancements. The indicators are designed to help non-technical policymakers understand the degree of advancement of different AI capabilities. AI researchers, policymakers, and other stakeholder groups, including economists, psychologists, and education specialists, are invited to submit feedback on the current beta framework.

There are nine categories of AI capability indicators, each presented on a five-level scale mapping AI progression toward full human equivalence, with Level 5 representing the most challenging capabilities for AI systems to attain. For each category, the OECD rates the current level of AI performance against human-equivalent capability, based on the latest available evidence, as follows:

  • Language – ranges from basic keyword recognition (Level 1) to contextually aware discourse generation and open-ended creative writing (Level 5). The OECD considers that the capability level of currently available AI systems is Level 3: reliable understanding and generation of semantic meaning using multi-modal language.
  • Social interaction – ranges from basic social cue interpretation (Level 1) to sophisticated emotional intelligence and multi-party conversational fluency (Level 5). The OECD considers that the capability level of currently available AI systems is Level 2: basic social perception with the ability to slightly adapt based on experience, emotions detected through tone and context, and limited social memory.
  • Problem solving – ranges from rule-based task execution (Level 1) to handling new scenarios that require adaptive reasoning, long-term planning, and multi-step inference (Level 5). The OECD considers that the capability level of currently available AI systems is Level 2: integration of qualitative and quantitative reasoning to address complex problems, with the capability to handle multiple qualitative states and predict how systems may evolve or change over time.
  • Creativity – measures originality and generative capacity in art, ranging from template-based generation (Level 1) to creation of entirely novel concepts (Level 5). The OECD considers that the capability level of currently available AI systems is Level 3: generation of output that deviates considerably from the training data, generalization of skills to new tasks, and integration of ideas across domains.
  • Metacognition and critical thinking – ranges from basic interpretation or recognition of information (Level 1) to managing complex trade-offs between goals, resources, and necessary skills (Level 5). The OECD considers that the capability level of currently available AI systems is Level 2: monitoring and adjustment of the system’s own understanding and approach according to each problem.
  • Knowledge, learning, and memory – ranges from data ingestion efficiency and retention (Level 1) to insight-generation from disparate knowledge sources (Level 5). The OECD considers that the capability level of currently available AI systems is Level 3: understanding semantics of information through distributed representations and generalization to novel situations.
  • Vision – ranges from basic object recognition (Level 1) to dynamic scene understanding and multi-object tracking under varied environmental conditions (Level 5). The OECD considers that the capability level of currently available AI systems is Level 3: adapting to variations in target object appearance and lighting, performing multiple subtasks, and coping with known variations in data and situations.
  • Manipulation – ranges from fine motor control in robotics, such as picking up simple items (Level 1), to dexterous manipulation of deformable objects (Level 5). The OECD considers that the capability level of currently available AI systems is Level 2: handling different object shapes and moderately pliable materials and operating in controlled environments with low to moderate clutter.
  • Robotic intelligence – integrates multiple subdomains like navigation, manipulation, and perception ranging from pre-programmed action (Level 1) to fully autonomous, self-learning robotic agents (Level 5). The OECD considers that the capability level of currently available robotic systems is Level 2: operating in partially known and semi-structured environments with some well-defined variability.
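
The nine scales above lend themselves to a simple data representation. The sketch below is a hypothetical illustration only (not an official OECD artifact); the category names and levels are taken from the summaries above, and the helper function name is our own:

```python
# Hypothetical encoding of the OECD's beta AI Capability Indicators:
# each of the nine categories mapped to the level (1-5) the OECD
# currently assigns to available AI systems, per the summaries above.
OECD_AI_CAPABILITY_LEVELS = {
    "Language": 3,
    "Social interaction": 2,
    "Problem solving": 2,
    "Creativity": 3,
    "Metacognition and critical thinking": 2,
    "Knowledge, learning, and memory": 3,
    "Vision": 3,
    "Manipulation": 2,
    "Robotic intelligence": 2,
}

def categories_at_or_above(threshold: int) -> list[str]:
    """Return the categories the OECD rates at or above a given level (1-5)."""
    return [name for name, level in OECD_AI_CAPABILITY_LEVELS.items()
            if level >= threshold]

# Example: which capabilities are currently rated at Level 3 or higher?
print(categories_at_or_above(3))
```

On this encoding, no category reaches Level 4 or 5, consistent with the OECD’s view that the most challenging capabilities remain beyond current systems.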

Continue Reading OECD Introduces AI Capability Indicators for Policymakers

On 4 June 2025, the European Commission published a decision recognising 13 critical raw material projects located in non-EU countries as “Strategic Projects” under the Critical Raw Materials Act (“CRMA”, Regulation (EU) 2024/1252). This first set of Strategic Projects based outside the EU adds to the 47 Strategic Projects based within the EU announced earlier this year. These Strategic Projects are recognized as significantly contributing to the security of the EU’s supply of strategic raw materials, and will benefit from preferential access to finance and other advantages. For more information on the CRMA and the framework for Strategic Projects, see our previous blog post here.

Continue Reading EU Designates 13 Non-EU Critical Raw Materials Projects as Strategic

In February 2025, the European Commission published two sets of guidelines to clarify key aspects of the EU Artificial Intelligence Act (“AI Act”): Guidelines on the definition of an AI system and Guidelines on prohibited AI practices. These guidelines are intended to provide guidance on the set of AI Act obligations that started to apply on February 2, 2025 – which includes the definitions section of the AI Act, obligations relating to AI literacy, and prohibitions on certain AI practices.

This article summarizes the key takeaways from the Commission’s guidelines on the definition of AI systems (the “Guidelines”). Please see our blog on the guidelines on prohibited AI practices here, and our blog on AI literacy requirements under the AI Act here.

Defining an “AI System” Under the AI Act

The AI Act (Article 3(1)) defines an “AI system” as (1) a machine-based system; (2) that is designed to operate with varying levels of autonomy; (3) that may exhibit adaptiveness after deployment; (4) and that, for explicit or implicit objectives; (5) infers, from the input it receives, how to generate outputs; (6) such as predictions, content, recommendations, or decisions; (7) that can influence physical or virtual environments. The Guidelines provide explanatory guidance on each of these seven elements.

Continue Reading European Commission Guidelines on the Definition of an “AI System”

AI chatbots are transforming how businesses handle consumer inquiries and complaints, offering speed and availability that traditional channels often cannot match.  However, the European Commission’s recent Digital Fairness Act Fitness Check has spotlighted a gap: EU consumers currently lack a cross-sectoral right to demand human contact when interacting with AI chatbots in business-to-consumer settings.  It is still unclear whether and how the European Commission proposes to address this.  The Digital Fairness Act could do so, but the Commission’s proposal is not expected to be published until the third quarter of 2026.  This post highlights key consumer protection considerations for companies deploying AI chatbots in the EU market.

AI Chatbots Cannot Be the Only Contact Channel

Under EU law, particularly the Consumer Rights Directive (“CRD”) and the eCommerce Directive, consumers must have access to traditional communication channels such as the trader’s postal address, telephone number, and email address.  The Court of Justice of the EU has made clear that consumers must be able to contact traders directly, quickly, and effectively (Case C-649/17).  While chatbots can assist, they cannot replace mandatory human contact options.

AI Chatbots as Supplementary Communication Channels

The CRD requires traders to disclose their primary contact details before concluding a contract, but does not prohibit offering AI chatbots as additional communication tools.  Where chatbots enable consumers to retain durable records of their interactions – including timestamps – traders should inform consumers of that functionality.  Durable records are defined as information stored in a medium that is accessible and unalterable for future reference, such as emails or downloadable files.

In any event, certain communications, such as the acknowledgment of a consumer’s right of withdrawal, must be provided in a “durable medium,” ensuring consumers have a stable and accessible record of important contractual information.

Human Oversight and the Right to Human Intervention

Continue Reading Digital Fairness Act Series: Topic 2 – Transparency and Disclosure Obligations for AI Chatbots in Consumer Interactions