
Marty Hansen

Martin Hansen has over two decades of experience representing some of the world’s leading innovative companies in the internet, IT, e-commerce, and life sciences sectors on a broad range of regulatory, intellectual property, and competition issues, including issues relating to artificial intelligence. Martin has extensive experience advising clients on matters arising under EU, UK, and U.S. law, the World Trade Organization agreements, and other trade agreements.

In case you missed it before the holidays: on 17 December 2024, the UK Government published a consultation on “Copyright and Artificial Intelligence” in which it examines proposals to change the UK’s copyright framework in light of the growth of the artificial intelligence (“AI”) sector.   

The Government sets out the following core objectives for a new copyright and AI framework:

  • Support right holders’ control of their content and, specifically, their ability to be remunerated when AI developers use that content, such as via licensing regimes;
  • Support the development of world-leading AI models in the UK, including by facilitating AI developers’ ability to access and use large volumes of online content to train their models; and
  • Promote greater trust between the creative and AI sectors (and among consumers) by introducing transparency requirements on AI developers about the works they are using to train AI models, and potentially requiring AI-generated outputs to be labelled.

In this post, we consider some of the most noteworthy aspects of the Government’s proposal.

  • The proposed regime would include a new text and data mining (TDM) exception

First and foremost, the Government is contemplating the introduction of a new TDM exception that would apply to TDM conducted for any purpose, including commercial purposes. The Government does not set out how it would define TDM, but refers to data mining as “the use of automated techniques to analyse large amounts of information (for AI training or other purposes)”. This new exception would apply where:

Continue Reading UK Government Proposes Copyright & AI Reform

On November 6, 2024, the UK Information Commissioner’s Office (ICO) released its AI Tools in recruitment audit outcomes report (“Report”). The Report documents the ICO’s findings from a series of consensual audit engagements conducted with AI tool developers and providers. The goal of this process was to assess compliance with data protection law, identify any risks or room for improvement, and provide recommendations for AI providers and recruiters. The audits covered sourcing, screening, and selection processes in recruitment, but did not include AI tools used to process biometric data, or generative AI. This work follows the publication of the Responsible AI in Recruitment guide by the Department for Science, Innovation, and Technology (DSIT) in March 2024.

Background

The ICO conducted a series of voluntary audits from August 2023 to May 2024. During the audits, the ICO made 296 recommendations, all of which were accepted or partially accepted by the organisations involved. These recommendations address areas such as:

  • Fair processing of personal data,
  • Data minimisation and lawful retention of data, and
  • Transparency in explaining AI logic.

Areas for Improvement

Based on its findings during the audits, the ICO identified several areas for improvement for both AI recruiters and AI providers. The key areas for improvement across both were:

Continue Reading ICO Audit on AI Recruitment Tools

On 2 December 2024, the European Data Protection Board (“EDPB”) adopted its draft guidelines on Article 48 GDPR (the “Draft Guidelines”). The Draft Guidelines are intended to provide guidance on the GDPR requirements applicable to private companies in the EU that receive requests or binding demands for personal data from public authorities (e.g., law enforcement or national security agencies, as well as other regulators) located outside the EU.

The Draft Guidelines focus in particular on Article 48 GDPR, which states that a binding demand from a non-EU public authority “requiring a controller or processor to transfer or disclose personal data may only be recognised or enforceable in any manner if based on an international agreement, such as a mutual legal assistance treaty, in force between the requesting third country and the Union or a Member State, without prejudice to other grounds for transfer pursuant to this Chapter.”

As an initial matter, the EDPB addresses the question of whether Article 48 operates as a blocking statute—i.e., a prohibition on disclosure of personal data subject to the GDPR to non-EU public authorities in the absence of an international agreement (e.g., a mutual legal assistance treaty) that permits that disclosure. The Draft Guidelines state that, even in the absence of such an international agreement, companies can in principle disclose personal data in response to such demands, provided that they (a) have a valid legal basis for doing so under Article 6 GDPR, and (b) can validly transfer the personal data outside the EU in accordance with Chapter V GDPR (e.g., on the basis of an EU adequacy decision, “appropriate safeguards”, or one of the derogations set out in Article 49 GDPR). The Draft Guidelines nonetheless make clear that, absent such an international agreement, any demand from a non-EU public authority will not be recognized as a binding demand by, or enforceable in, EU courts.

The Draft Guidelines also provide guidance on the Article 6 legal bases and Chapter V transfer grounds that might apply where a private entity receives a request or demand for personal data from a non-EU public authority. This guidance is broadly consistent with the EDPB’s analysis in its 2019 joint opinion with the EDPS on the CLOUD Act. Of particular note:

Continue Reading EDPB adopts draft guidelines on requirements when responding to requests from non-EU public authorities

On November 4, 2024, the European Commission (“Commission”) adopted the implementing regulation on transparency reporting under the Digital Services Act (“DSA”). The implementing regulation is intended to harmonise the format and reporting time periods of the transparency reports required by the DSA.

Transparency reporting is required under Articles 15, 24 and 42 of the DSA. Obligations vary depending on whether the reporting entity is a provider of an intermediary service, hosting service, online platform, very large online platform (“VLOP”) or very large online search engine (“VLOSE”) (collectively, “Providers”).

The implementing regulation requires Providers to use the templates set out in Annex 1 of that regulation when complying with their DSA transparency reporting obligations. Providers must complete and publish this information in accordance with the instructions set out in Annex 2.

The Templates

Annex 1 contains two templates: (1) a “Quantitative Template” consisting of eight sections and (2) a “Qualitative Template” consisting of one section (collectively, the “Templates”):

  • The Quantitative Template is to be used to provide quantitative machine-readable information on content moderation. Each of the eight sections sets out tables where Providers can input standardised information on issues such as Member State orders to act against illegal content, notices submitted under the DSA, own-initiative content moderation, and handling of complaints through their internal complaint mechanisms.
  • The Qualitative Template is to be used to provide qualitative information on content moderation. It requires Providers to input free text descriptions under a range of indicators such as “Summary of the content moderation engaged in at the providers’ own initiative” or “Safeguards applied to the use of automated means.”

Continue Reading European Commission Adopts Implementing Regulation on DSA Transparency Reporting Obligations

By Madelaine Harrington & Marty Hansen on July 17, 2024

On 12 July 2024, EU lawmakers published the EU Artificial Intelligence Act (“AI Act”), a first-of-its-kind regulation aiming to harmonise rules on AI models and systems across the EU. The AI Act prohibits certain AI practices, and sets out regulations on

Continue Reading EU Artificial Intelligence Act Published

On May 30, 2024, the Court of Justice of the EU (“CJEU”) handed down its rulings in several cases (C-665/22, Joined Cases C‑664/22 and C‑666/22, C‑663/22, and Joined Cases C‑662/22 and C‑667/22) concerning the compatibility with EU law of certain Italian measures imposing obligations on providers of online platforms and search engines.  In doing so, the CJEU upheld the so-called “country-of-origin” principle, established in the EU’s e-Commerce Directive and based on the EU Treaties principle of free movement of services.  The country-of-origin principle gives the Member State where an online service provider is established exclusive authority (“competence”) to regulate access to, and exercise of, the provider’s services and prevents other Member States from imposing additional requirements.

We provide below an overview of the Court’s key findings.

Background

The cases originate from proceedings brought by several online intermediation and search engine service providers (collectively, “providers”) against the Italian regulator for communications (“AGCOM”).  The providers, which are not established in Italy, challenged measures adopted by AGCOM designed to ensure the “adequate and effective enforcement” of the EU Platform-to-Business Regulation (“P2B Regulation”).  Among other things, those measures required the providers, depending on the case, to: (1) enter their business into a national register; (2) provide detailed information, including information about the company’s economic situation, ownership structure, and organization; and (3) pay a financial contribution to the regulator for the purposes of supporting its supervision activities. 

The Country-of-Origin Principle

In its rulings, the Court notes that the e-Commerce Directive’s country-of-origin principle relieves online service providers of having to comply with multiple Member State requirements falling within the so-called “coordinated field” (as defined in Article 2(h)-(i) of e-Commerce Directive), that is, requirements concerning access to the service (such as qualifications, authorizations or notifications), and the provision of the service (such as the provider’s behavior, the quality or content of services). 

Member States other than the one where the service provider is established cannot restrict the freedom to provide such online services for reasons falling within the coordinated field, unless certain conditions are met.  In particular, measures may be taken when necessary for reasons of public policy, protection of public health, public security, or the protection of consumers, among other conditions (Article 3(4) of the e-Commerce Directive).

Continue Reading CJEU Upholds Country-of-Origin Principle for Online Service Providers in the EU

Although the final text of the EU AI Act should enter into force in the next few months, many of its obligations will only start to apply two or more years after that (for further details, see our earlier blog here). To address this gap, the Commission is encouraging

Continue Reading European Commission Calls on Industry to Commit to the AI Pact in the Run-Up to the European Elections

Earlier this week, Members of the European Parliament (MEPs) cast their votes in favor of the much-anticipated AI Act. With 523 votes in favor, 46 votes against, and 49 abstentions, the vote is a culmination of an effort that began in April 2021, when the EU Commission first published its 

Continue Reading EU Parliament Adopts AI Act

A seemingly technical development could have significant consequences for cloud service providers established outside the EU. The proposed EU Cybersecurity Certification Scheme for Cloud Services (EUCS)—which has been developed by the EU cybersecurity agency ENISA over the past two years and is expected to be adopted by the European Commission as an implementing act in Q1 2024—would, if adopted in its current form, establish certain requirements that could:

  1. exclude non-EU cloud providers from providing certain (“high” level) services to European companies, and
  2. preclude EU cloud customers from accessing the services of these non-EU providers.

Data Localization and EU Headquarters

The EUCS arises from the EU’s Cybersecurity Act, which called for the creation of an EU-wide security certification scheme for cloud providers, to be developed by ENISA and adopted by the Commission through secondary law (as noted in an earlier blog). After public consultations in 2021, ENISA set up an ad hoc working group tasked with preparing a draft.

France, Italy, and Spain submitted a proposal to the working group advocating new criteria that companies would need to meet to qualify to offer services at the scheme’s highest (“high”) security level. The proposed criteria included localization of cloud services and data within the EU – meaning, in essence, that providers would need to be headquartered in, and provide their cloud services from, the EU. Ireland, Sweden, and the Netherlands argued that such requirements do not belong in a cybersecurity certification scheme, as requiring cloud providers to be based in Europe reflects political rather than cybersecurity concerns, and therefore proposed that the issue be discussed by the Council of the EU.

Continue Reading Implications of the EU Cybersecurity Scheme for Cloud Services

On 26 October 2023, the UK’s Online Safety Bill received Royal Assent, becoming the Online Safety Act (“OSA”).  The OSA imposes various obligations on tech companies to prevent the uploading of, and rapidly remove, illegal user content—such as terrorist content, revenge pornography, and child sexual exploitation material—from their services, and

Continue Reading UK Online Safety Bill Receives Royal Assent