
Madelaine Harrington

Madelaine Harrington is an associate in the technology and media group. Her practice covers a wide range of regulatory and policy matters at the intersection of privacy, content moderation, artificial intelligence, and free expression. Madelaine has deep experience with regulatory investigations and has counseled multinational companies on complex cross-jurisdictional fact-gathering exercises and responses to alleged non-compliance. She routinely counsels clients on compliance with the EU regulatory framework, including the General Data Protection Regulation (GDPR), among other EU laws and legislative proposals.

Madelaine’s representative matters include:

  • coordinating responses to investigations into the handling of personal information under the GDPR;
  • counseling major technology companies on the use of artificial intelligence, including facial recognition technology in public spaces;
  • advising a major technology company on the legality of hacking defense tactics; and
  • advising a content company on compliance obligations under the EU Digital Services Act (“DSA”), including rules regarding recommender systems.

Madelaine’s work has previously involved representing U.S.-based clients on a wide range of First Amendment issues, including defamation lawsuits, access to courts, and FOIA requests. She maintains an active pro bono practice representing journalists with various news-gathering needs.

On July 10, 2025, the AI Office published the final version of the Code of Practice for General-Purpose AI Models (the “Code”). The Code is a voluntary compliance tool designed to help companies meet the AI Act’s obligations for providers of general-purpose AI (“GPAI”) models. The AI Office and the AI Board will now assess the Code and may approve it via an adequacy decision. Once approved, the European Commission is expected to formally adopt the Code via an implementing act.

The Code details how providers of GPAI models may comply with their obligations under the AI Act. It comprises three chapters, each covering a different aspect of AI Act compliance: (i) transparency, (ii) copyright, and (iii) safety and security. The first two chapters apply to all providers of GPAI models, while the third addresses obligations for providers of GPAI models with systemic risk. By adhering to the Code, signatories agree to implement their AI practices in accordance with the commitments contained in the Code.

On 14 July 2025, the European Commission published its final guidelines on the protection of minors under the Digital Services Act (“DSA”) (the “Guidelines”). The Guidelines are intended to provide guidance to providers of online platforms that are “accessible to minors” on meeting their obligations to “put in place appropriate and proportionate measures to ensure a high level of privacy, safety, and security of minors, on their service” (DSA, Art. 28(1)).

The European Commission published a draft version of the guidelines for consultation on 13 May 2025 (“Draft Guidelines”) (see our blog post here). The final Guidelines include some amendments to the Draft Guidelines based on the feedback received during the consultation, further clarifying and building out the recommended measures.

Although the Guidelines are non-binding, the Commission has made clear that it intends to use the Guidelines as a “significant and meaningful” benchmark when assessing in-scope providers’ compliance with Article 28(1) DSA.

EU lawmakers are reportedly considering a delay to the enforcement of certain provisions of the EU Artificial Intelligence Act (“AI Act”). While the AI Act formally entered into force on 1 August 2024, its obligations apply on a rolling basis. Requirements related to AI literacy and the prohibition of certain AI practices have applied since 2 February 2025.


In February 2025, the European Commission published two sets of guidelines to clarify key aspects of the EU Artificial Intelligence Act (“AI Act”): Guidelines on the definition of an AI system and Guidelines on prohibited AI practices. These guidelines address the set of AI Act obligations that began to apply on February 2, 2025, including the definitions section of the AI Act, obligations relating to AI literacy, and prohibitions on certain AI practices.

This article summarizes the key takeaways from the Commission’s guidelines on the definition of AI systems (the “Guidelines”). Please see our blog post on the guidelines on prohibited AI practices here, and our blog post on AI literacy requirements under the AI Act here.

Defining an “AI System” Under the AI Act

The AI Act (Article 3(1)) defines an “AI system” as (1) a machine-based system; (2) that is designed to operate with varying levels of autonomy; (3) that may exhibit adaptiveness after deployment; (4) and that, for explicit or implicit objectives; (5) infers, from the input it receives, how to generate outputs; (6) such as predictions, content, recommendations, or decisions; (7) that can influence physical or virtual environments. The Guidelines provide explanatory guidance on each of these seven elements.

On November 4, 2024, the European Commission (“Commission”) adopted the implementing regulation on transparency reporting under the Digital Services Act (“DSA”). The implementing regulation is intended to harmonise the format and reporting time periods of the transparency reports required by the DSA.

Transparency reporting is required under Articles 15, 24 and 42 of the DSA. Obligations vary depending on whether the reporting entity is a provider of an intermediary service, hosting service, online platform, very large online platform (“VLOP”) or very large online search engine (“VLOSE”) (collectively, “Providers”).

The implementing regulation requires Providers to use the templates set out in Annex 1 of that regulation when complying with their DSA transparency reporting obligations. Providers must complete and publish this information in accordance with the instructions set out in Annex 2.

The Templates

Annex 1 contains two templates: (1) a “Quantitative Template” consisting of eight sections and (2) a “Qualitative Template” consisting of one section (collectively, the “Templates”):

  • The Quantitative Template is to be used to provide quantitative machine-readable information on content moderation. Each of the eight sections sets out tables where Providers can input standardised information on issues such as Member State orders to act against illegal content, notices submitted under the DSA, own-initiative content moderation, and handling of complaints through their internal complaint mechanisms.
  • The Qualitative Template is to be used to provide qualitative information on content moderation. It requires Providers to input free text descriptions under a range of indicators such as “Summary of the content moderation engaged in at the providers’ own initiative” or “Safeguards applied to the use of automated means.”


By Madelaine Harrington & Marty Hansen on July 17, 2024

On 12 July 2024, EU lawmakers published the EU Artificial Intelligence Act (“AI Act”), a first-of-its-kind regulation aiming to harmonise rules on AI models and systems across the EU. The AI Act prohibits certain AI practices and sets out rules for high-risk AI systems and general-purpose AI models.


Earlier this week, Members of the European Parliament (MEPs) cast their votes in favor of the much-anticipated AI Act. With 523 votes in favor, 46 votes against, and 49 abstentions, the vote is the culmination of an effort that began in April 2021, when the EU Commission first published its initial proposal for the regulation.
