Fredericka Argent

Fredericka Argent is a special counsel in Covington’s technology regulatory group in London. She advises leading multinationals on some of their most complex regulatory, policy and compliance-related issues, including data protection, copyright and the moderation of online content.

Fredericka regularly provides strategic advice to companies on complying with data protection laws in the UK and Europe, as well as defending organizations in cross-border, contentious investigations and regulatory enforcement in the UK and EU Member States. She advises global technology and software companies on EU copyright and database rights rules, including the implications of legislative developments on their business. She also counsels clients on a range of policy initiatives and legislation that affect the technology sector, such as the moderation of harmful or illegal content online, rules affecting the audiovisual media sector and EU accessibility laws.

Fredericka represents rights owners in the publishing, software and life sciences industries on online IP enforcement matters, and helps coordinate an in-house internet investigations team that conducts global monitoring, reporting, and notice-and-takedown programs to combat online piracy.

The European Commission (“Commission”) recently launched two stakeholder consultations under the EU AI Act. The first (see here), closing on 9 January 2026, relates to the copyright-related obligations for General Purpose AI (“GPAI”) providers under the AI Act and GPAI Code of Practice. The second (see here)

Continue Reading European Commission Launches Consultations on the EU AI Act’s Copyright Provisions and AI Regulatory Sandboxes

On 28 June 2025, the European Accessibility Act (“EAA”), a 2019 directive, will begin applying to covered products and services. The EAA imposes various obligations on, among others, technology and online service providers, requiring them to ensure that the products and services they offer in the EU are made accessible to consumers with disabilities. According to its recitals, the goal of the EAA is to increase the availability of accessible products and services in the EU and improve the accessibility of information provided to consumers about those products and services.

In 2015, following a consultation and impact assessment, the European Commission proposed the draft EAA. EU lawmakers were motivated by the fact that rules governing the accessibility requirements for products and services were fragmented across the EU, leading to a poorer experience for consumers and undermining the EU’s commitment to the United Nations Convention on the Rights of Persons with Disabilities (UNCRPD). The EAA was eventually adopted in 2019. As a Directive, the EAA must be transposed into national law by each Member State (the deadline for doing so was June 2022).

Covered products and services

Article 2 of the EAA identifies five categories of products and six categories of services that are subject to harmonized accessibility rules.

The covered products are:

  • General purpose computer hardware and operating systems,
  • Self-service terminals (such as ATMs),
  • Devices used for electronic communications services,
  • Devices used to access audiovisual media services, and
  • E-readers.

The covered services are:

  • Electronic communications services (except for those used to provide machine-to-machine services),
  • Services providing access to audiovisual media services,
  • Elements relating to passenger transport services, such as websites and electronic ticketing,
  • Consumer banking services,
  • E-books and dedicated software, and
  • E-commerce services.

Overview of the accessibility obligations

The substantive accessibility obligations that apply to product manufacturers and service providers are set out in the Annexes to the EAA. At a general level, product manufacturers are required to ensure that products they provide to consumers are “designed and produced in such a way as to maximize their foreseeable use by persons with disabilities” and that they are accompanied by “accessible information on their functioning and on their accessibility features” (Annex I, Section I). Annex I, Section I goes on to describe in some detail the specific requirements for making products accessible to consumers with a range of disabilities, such as visual and hearing impairments, as well as ensuring that products are designed so that they can be used by individuals with reduced motor control and/or strength. Annex I, Sections I and II also place accessibility obligations on product manufacturers with respect to the information that they provide to consumers, for instance by requiring that instructions for the use, installation, maintenance, storage and disposal of products are made available in accessible formats.

Continue Reading European Accessibility Act: June 2025 deadline has arrived

In case you missed it before the holidays: on 17 December 2024, the UK Government published a consultation on “Copyright and Artificial Intelligence”, in which it examines proposals to change the UK’s copyright framework in light of the growth of the artificial intelligence (“AI”) sector.

The Government sets out the following core objectives for a new copyright and AI framework:

  • Support right holders’ control of their content and, specifically, their ability to be remunerated when AI developers use that content, such as via licensing regimes;
  • Support the development of world-leading AI models in the UK, including by facilitating AI developers’ ability to access and use large volumes of online content to train their models; and
  • Promote greater trust between the creative and AI sectors (and among consumers) by introducing transparency requirements on AI developers about the works they are using to train AI models, and potentially requiring AI-generated outputs to be labelled.

In this post, we consider some of the most noteworthy aspects of the Government’s proposal.

  • The proposed regime would include a new text and data mining (TDM) exception

First and foremost, the Government is contemplating the introduction of a new TDM exception that would apply to TDM conducted for any purpose, including commercial purposes. The Government does not set out how it would define TDM, but refers to data mining as “the use of automated techniques to analyse large amounts of information (for AI training or other purposes)”. This new exception would apply where:

Continue Reading UK Government Proposes Copyright & AI Reform

Facial recognition technology (“FRT”) has attracted a fair amount of attention over the years, including in the EU (e.g., see our posts on the European Parliament vote and CNIL guidance), the UK (e.g., ICO opinion and High Court decision) and the U.S. (e.g., Washington state and NTIA guidelines). This post summarizes two recent developments in this space: (i) the UK Information Commissioner’s Office (“ICO”)’s announcement of a £7.5-million fine and enforcement notice against Clearview AI (“Clearview”), and (ii) the EDPB’s release of draft guidelines on the use of FRT in law enforcement.

I. ICO Fines Clearview AI £7.5m

In the past year, Clearview has been subject to investigations into its data processing activities by the French and Italian authorities, as well as a joint investigation by the ICO and the Australian Information Commissioner. All four regulators held that Clearview’s processing of biometric data, derived from over 20 billion facial images scraped from across the internet (including from social media sites), breached data protection laws.

On 26 May 2022, the ICO released its monetary penalty notice and enforcement notice against Clearview. The ICO concluded that Clearview’s activities infringed a number of provisions of the GDPR and UK GDPR, including:

  • Failing to process data in a way that is fair and transparent under Article 5(1)(a) GDPR. The ICO concluded that people were not made aware that their images were being scraped, and would not reasonably expect them to be added to a worldwide database and made available to a wide range of customers for the purpose of matching against images on the company’s database.
  • Failing to process data in a way that is lawful under the GDPR. The ICO ruled that Clearview’s processing did not meet any of the conditions for lawful processing set out in Article 6, nor, for biometric data, in Article 9(2) GDPR.
  • Failing to have a data retention policy and thus being unable to ensure that personal data are not retained for longer than necessary under Article 5(1)(e) GDPR. There was no indication as to when (or whether) any images are ever removed from Clearview’s database.
  • Failing to provide data subjects with the necessary information under Article 14 GDPR. According to the ICO’s investigation, the only way in which data subjects could obtain that information was by contacting Clearview and directly requesting it.
  • Impeding the exercise of data subject rights under Articles 15, 16, 17, 21 and 22 GDPR. In order to exercise these rights, data subjects had to provide Clearview with additional personal data, in the form of a photograph of themselves that could be matched against Clearview’s database.
  • Failing to conduct a Data Protection Impact Assessment (“DPIA”) under Article 35 GDPR. The ICO found that Clearview failed at any time to conduct a DPIA in respect of its processing of the personal data of UK residents.

Continue Reading Facial Recognition Update: UK ICO Fines Clearview AI £7.5m & EDPB Adopts Draft Guidelines on Use of FRT by Law Enforcement