Jadzia Pierce

Jadzia Pierce advises clients developing and deploying technology on a range of regulatory matters, including the intersection of AI governance and data protection. Jadzia draws on her experience in senior in-house leadership roles and extensive, hands-on engagement with regulators worldwide. Prior to rejoining Covington in 2026, Jadzia served as Global Data Protection Officer at Microsoft, where she oversaw and advised on the company’s GDPR/UK GDPR program and acted as a primary point of contact for supervisory authorities on matters including AI, children’s data, advertising, and data subject rights.

Jadzia previously served as Director of Microsoft’s Global Privacy Policy function and as Associate General Counsel for Cybersecurity at McKinsey & Company. She began her career at Covington, advising Fortune 100 companies on privacy, cybersecurity, incident preparedness and response, investigations, and data-driven transactions.

At Covington, Jadzia helps clients operationalize defensible, scalable approaches to AI-enabled products and services, aligning privacy and security obligations with rapidly evolving regulatory frameworks across jurisdictions, with a particular focus on anticipating enforcement trends and navigating inter-regulator dynamics.

On February 19, 2026, the UK Court of Appeal handed down its decision in DSG Retail Limited v The Information Commissioner [2026] EWCA Civ 140. The Court ruled that a controller’s data security duty applies to all personal data for which it acts as controller, irrespective of whether the information would constitute personal data in the hands of a third party (in this case, an attacker). Note that the case concerns events before the GDPR came into force, so the legal context is provided by the UK Data Protection Act 1998 (“DPA 1998”), although the Court did take into account more recent jurisprudence, including CJEU case law.

The case adds useful colour to ongoing debates surrounding the definition of “personal data.” The Court of Appeal confirmed that a controller’s duty to implement appropriate measures to protect personal data applies to data that is “personal” from the perspective of the controller, even if a third-party attacker could not identify individuals from the exfiltrated dataset. This dovetails with the clarification in EDPS v SRB that whether data is “personal” can depend on the context, while a controller’s obligations (such as transparency) must be assessed from the controller’s perspective at the relevant time (which, for the transparency principle, is the time of collection of the data). (For more information on EDPS v SRB, see our prior post here.)

Continue Reading: UK Court of Appeal Rules on the Concept of Personal Data in the Context of Data Security

On February 18, 2026, the European Data Protection Board (“EDPB”) published its Report on Stakeholder Event on Anonymisation and Pseudonymisation of 12 December 2025 (the “Report”). The Report summarises feedback from a remote stakeholder event convened to inform the EDPB’s ongoing work on Guidelines 01/2025 on Pseudonymisation (version for public consultation available here) and forthcoming guidance on anonymisation. The event gathered input from 115 participants spanning industry, NGOs, academia, law firms, and public sector bodies.

The objective of the Report is to capture stakeholder insights on how the General Data Protection Regulation (“GDPR”) applies to anonymisation and pseudonymisation, particularly following the Court of Justice of the European Union’s (“CJEU”) judgment in EDPS v SRB (C‑413/23 P). (See our previous blog post here.)

Continue Reading: EDPB Publishes Report on Stakeholder Event on Anonymisation and Pseudonymisation

On 3 February 2026, the second International AI Safety Report (the “Report”) was published, providing a comprehensive, science-based assessment of the capabilities and risks of general-purpose AI (“GPAI”). The Report describes itself as the largest global collaboration on AI safety to date: led by Turing Award winner Yoshua Bengio, backed by an Expert Advisory Panel with nominees from more than 30 countries and international organizations, and authored by over 100 AI experts.

The Report does not make specific policy recommendations; instead, it synthesizes scientific evidence to provide an evidence base for decision-makers. This blog summarizes the Report’s key findings across its three central questions: (i) what can GPAI do today, and how might its capabilities change? (ii) what emerging risks does it pose? and (iii) what risk management approaches exist?

Continue Reading: International AI Safety Report 2026 Examines AI Capabilities, Risks, and Safeguards

AI agents have arrived. Although the technology is not new, agents are rapidly becoming more sophisticated: capable of operating with greater autonomy, executing multi-step tasks, and interacting with other agents in ways that were largely theoretical just a few years ago. Organizations are already deploying agentic AI across software development, workflow automation, customer service, and e-commerce, with more ambitious applications on the horizon. As these systems grow in capability and prevalence, a pressing question has emerged: can existing legal frameworks, generally designed with human decision-makers in mind, be applied coherently to machines that operate with significant independence?

In January 2026, as part of its Tech Futures series, the UK Information Commissioner’s Office (“ICO”) published a report setting out its early thinking on the data protection implications of agentic AI. The report explicitly states that it is not intended to constitute “guidance” or “formal regulatory expectations.” Nevertheless, it provides meaningful insight into the ICO’s emerging view of agentic AI and its approach to applying data protection obligations to this context—insight that may foreshadow the regulator’s direction of travel.

The full report is lengthy and worth the read. This blog focuses on the data protection and privacy risks identified by the ICO, with the aim of helping product and legal teams anticipate potential regulatory issues early in the development process.

Continue Reading: ICO Shares Early Views on Agentic AI & Data Protection