Fredericka Argent

Fredericka Argent is a special counsel in Covington’s technology regulatory group in London. She advises leading multinationals on some of their most complex regulatory, policy and compliance-related issues, including data protection, copyright and the moderation of online content.

Fredericka regularly provides strategic advice to companies on complying with data protection laws in the UK and Europe, and defends organizations in contentious, cross-border investigations and regulatory enforcement actions in the UK and EU Member States. She advises global technology and software companies on EU copyright and database rights rules, including the implications of legislative developments for their businesses. She also counsels clients on a range of policy initiatives and legislation that affect the technology sector, such as the moderation of harmful or illegal content online, rules affecting the audiovisual media sector and EU accessibility laws.

Fredericka represents rights owners in the publishing, software and life sciences industries on online IP enforcement matters, and helps coordinate an in-house internet investigations team that conducts global monitoring, reporting, and notice-and-takedown programs to combat internet piracy.

In case you missed it before the holidays: on 17 December 2024, the UK Government published a consultation on “Copyright and Artificial Intelligence”, examining proposals to change the UK’s copyright framework in light of the growth of the artificial intelligence (“AI”) sector.

The Government sets out the following core objectives for a new copyright and AI framework:

  • Support right holders’ control of their content and, specifically, their ability to be remunerated when AI developers use that content, such as via licensing regimes;
  • Support the development of world-leading AI models in the UK, including by facilitating AI developers’ ability to access and use large volumes of online content to train their models; and
  • Promote greater trust between the creative and AI sectors (and among consumers) by introducing transparency requirements on AI developers about the works they are using to train AI models, and potentially requiring AI-generated outputs to be labelled.

In this post, we consider some of the most noteworthy aspects of the Government’s proposal.

  • The proposed regime would include a new text and data mining (TDM) exception

First and foremost, the Government is contemplating the introduction of a new TDM exception that would apply to TDM conducted for any purpose, including commercial purposes. The Government does not set out how it would define TDM, but refers to data mining as “the use of automated techniques to analyse large amounts of information (for AI training or other purposes)”. This new exception would apply where:

Continue Reading UK Government Proposes Copyright & AI Reform

Facial recognition technology (“FRT”) has attracted a fair amount of attention over the years, including in the EU (e.g., see our posts on the European Parliament vote and CNIL guidance), the UK (e.g., ICO opinion and High Court decision) and the U.S. (e.g., Washington state and NTIA guidelines). This post summarizes two recent developments in this space: (i) the UK Information Commissioner’s Office (“ICO”)’s announcement of a £7.5-million fine and enforcement notice against Clearview AI (“Clearview”), and (ii) the EDPB’s release of draft guidelines on the use of FRT in law enforcement.

I. ICO Fines Clearview AI £7.5m

In the past year, Clearview has been subject to investigations into its data processing activities by the French and Italian authorities, and to a joint investigation by the ICO and the Australian Information Commissioner. All four regulators held that Clearview’s processing of biometric data, derived from over 20 billion facial images scraped from across the internet, including from social media sites, breached data protection laws.

On 26 May 2022, the ICO released its monetary penalty notice and enforcement notice against Clearview. The ICO concluded that Clearview’s activities infringed a number of provisions of the GDPR and the UK GDPR, including:

  • Failing to process data in a way that is fair and transparent under Article 5(1)(a) GDPR. The ICO concluded that people were not made aware of, and would not reasonably expect, their images to be scraped, added to a worldwide database, and made available to a wide range of customers for the purpose of matching images against the company’s database.
  • Failing to process data in a way that is lawful under the GDPR. The ICO ruled that Clearview’s processing did not meet any of the conditions for lawful processing set out in Article 6, nor, for biometric data, in Article 9(2) GDPR.
  • Failing to have a data retention policy and thus being unable to ensure that personal data are not retained for longer than necessary under Article 5(1)(e) GDPR. There was no indication as to when (or whether) any images are ever removed from Clearview’s database.
  • Failing to provide data subjects with the necessary information under Article 14 GDPR. According to the ICO’s investigation, the only way in which data subjects could obtain that information was by contacting Clearview and directly requesting it.
  • Impeding the exercise of data subject rights under Articles 15, 16, 17, 21 and 22 GDPR. In order to exercise these rights, data subjects needed to provide Clearview with additional personal data, in the form of a photograph of themselves that could be matched against Clearview’s database.
  • Failing to conduct a Data Protection Impact Assessment (“DPIA”) under Article 35 GDPR. The ICO found that Clearview failed at any time to conduct a DPIA in respect of its processing of the personal data of UK residents.

Continue Reading Facial Recognition Update: UK ICO Fines Clearview AI £7.5m & EDPB Adopts Draft Guidelines on Use of FRT by Law Enforcement