Facial recognition technology (“FRT”) has attracted significant regulatory attention in recent years, including in the EU (e.g., see our posts on the European Parliament vote and CNIL guidance), the UK (e.g., ICO opinion and High Court decision) and the U.S. (e.g., Washington state and NTIA guidelines). This post summarizes two recent developments in this space: (i) the UK Information Commissioner’s Office’s (“ICO”) announcement of a £7.5-million fine and enforcement notice against Clearview AI (“Clearview”), and (ii) the European Data Protection Board’s (“EDPB”) release of draft guidelines on the use of FRT in law enforcement.

I. ICO Fines Clearview AI £7.5m

In the past year, Clearview has been subject to investigations into its data processing activities by the French and Italian authorities, and to a joint investigation by the ICO and the Australian Information Commissioner. All four regulators held that Clearview’s processing of biometric data, derived from over 20 billion facial images scraped from across the internet, including from social media sites, breached data protection laws.

On 26 May 2022, the ICO released its monetary penalty notice and enforcement notice against Clearview. The ICO concluded that Clearview’s activities infringed a number of provisions of the GDPR and UK GDPR, including:

  • Failing to process data in a way that is fair and transparent under Article 5(1)(a) GDPR. The ICO concluded that people were not made aware, and would not reasonably expect, that their images would be scraped, added to a worldwide database, and made available to a wide range of customers for the purpose of matching against images on the company’s database.
  • Failing to process data in a way that is lawful under the GDPR. The ICO ruled that Clearview’s processing did not meet any of the conditions for lawful processing set out in Article 6, nor, for biometric data, in Article 9(2) GDPR.
  • Failing to have a data retention policy and thus being unable to ensure that personal data are not retained for longer than necessary under Article 5(1)(e) GDPR. There was no indication as to when (or whether) any images are ever removed from Clearview’s database.
  • Failing to provide data subjects with the necessary information under Article 14 GDPR. According to the ICO’s investigation, the only way in which data subjects could obtain that information was by contacting Clearview and directly requesting it.
  • Impeding the exercise of data subject rights under Articles 15, 16, 17, 21 and 22 GDPR. To exercise these rights, data subjects needed to provide Clearview with additional personal data, in the form of a photograph of themselves that could be matched against Clearview’s database.
  • Failing to conduct a Data Protection Impact Assessment (“DPIA”) under Article 35 GDPR. The ICO found that Clearview failed at any time to conduct a DPIA in respect of its processing of the personal data of UK residents.

The ICO decided to fine Clearview £7.5 million, down from the £17 million fine initially proposed. The monetary penalty notice sets out the ICO’s reasoning behind the fine. Because Clearview did not provide any figures for its income or turnover, the ICO was unable to calculate Clearview’s financial gain from the activities in question, which would ordinarily serve as a step in calculating the fine amount. The ICO instead had regard to the range of penalties available to it, and set a starting point just below the mid-point of that range, amounting to £7,552,800. The ICO then considered the other statutory factors and concluded that none justified either an increase or a reduction from that starting point. This included consideration of Clearview’s representations that it had acted on requests from UK data subjects to exclude their images from future searches.
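
To make the mechanics concrete, the approach described in the notice amounts to simple staged arithmetic: take the range of penalties available, start just below its mid-point, and then apply any net adjustment for the statutory factors (here, none). The sketch below is illustrative only; the function name, the range bounds, and the offset are hypothetical placeholders chosen to reproduce the published figure, not values taken from the notice.

```python
def staged_fine(range_low: float, range_high: float,
                offset_below_midpoint: float,
                net_adjustment: float = 0.0) -> float:
    """Starting point just below the mid-point of the available penalty
    range, plus any net adjustment for the statutory factors."""
    starting_point = (range_low + range_high) / 2 - offset_below_midpoint
    return starting_point + net_adjustment

# Hypothetical inputs: a range running from zero to the UK GDPR fixed
# maximum of GBP 17.5 million, and an offset reverse-engineered to land on
# the published starting point. Per the notice, no statutory factor
# justified any net adjustment, so the fine equals the starting point.
print(staged_fine(0, 17_500_000, 1_197_200))  # 7552800.0
```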

Although Clearview no longer offers its services to UK organisations, the ICO, citing the risk that data concerning UK residents could be used in the company’s offerings elsewhere, ordered Clearview to take the following steps in addition to paying the £7.5 million fine:

  • Deleting the personal data of UK residents from its systems, within six months of the expiry of the appeal period.
  • Refraining from any further processing of the personal data of data subjects resident in the UK, within three months following the date of the expiry of the appeal period.
  • Refraining from offering any service provided by way of its database to any UK customer.
  • Refraining from doing anything in the future that would fall under the above points without first conducting a DPIA and providing it to the ICO.

The ICO imposed a similar deletion order on HMRC in 2019, giving the tax authority 28 days to delete all biometric voice data that it did not have explicit consent to process.

II. EDPB Publishes New Draft Guidelines on the Use of FRT in Law Enforcement

On 12 May 2022, the EDPB adopted draft guidelines providing guidance to lawmakers and law enforcement authorities (“LEAs”) on implementing and using FRT systems. The guidelines provide that FRT should be used only in compliance with the Law Enforcement Directive (“LED”) and only where necessary and proportionate, as required by the Charter of Fundamental Rights. The EDPB’s draft guidelines are open for public consultation until 27 June 2022. If adopted, they will affect the requests for data, software, and other technology that the EU and LEAs can make of private companies.

The EDPB makes several references to processing of personal data in a law enforcement context that relies on databases, similar to Clearview’s, populated by “scraping” photographs accessible online on a mass scale, and calls for a ban on LEAs’ use of such databases. The EDPB notes that when assessing whether processing relates to data that are “manifestly made public by the data subject” (a lawful ground for processing biometric data under Article 10 LED), the fact that a photograph has been “manifestly made public” does not mean that the related biometric data, which can be retrieved from the photograph using FRT tools, have also been “manifestly made public.” For biometric data to be regarded as “manifestly made public,” the data subject must have deliberately made their biometric data freely accessible and public through an open source. Further, the EDPB notes that the default settings of a service (e.g., where data is made public by default on a social networking platform) should not be construed as making data “manifestly made public.”
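
As technical background to this distinction, FRT tools typically derive biometric data from a photograph by mapping the face to a numerical “embedding,” and matching means comparing embeddings for similarity against those stored in a database. The sketch below is a minimal illustration of that general pipeline; the function names are illustrative, the embedding function is a placeholder (a real system would use a trained face-recognition model), and nothing here describes Clearview’s actual implementation.

```python
import numpy as np

def embed(face_image: np.ndarray) -> np.ndarray:
    """Placeholder for a trained face-recognition model that maps a face
    image to a fixed-length biometric template (embedding)."""
    vector = face_image.flatten().astype(np.float64)
    return vector / np.linalg.norm(vector)

# A database of embeddings previously derived from scraped photographs.
rng = np.random.default_rng(0)
database = [embed(rng.random((64, 64))) for _ in range(1_000)]

def match(probe_image: np.ndarray, threshold: float = 0.9) -> list[int]:
    """Return the indices of stored embeddings similar enough to the
    probe's embedding to count as a match (cosine similarity on unit
    vectors)."""
    probe = embed(probe_image)
    return [i for i, candidate in enumerate(database)
            if float(np.dot(probe, candidate)) >= threshold]
```

The point relevant to the EDPB’s analysis is that the embedding, i.e., the biometric data, is generated by the FRT tool from the photograph; publishing the photograph does not in itself place that derived data into the public domain.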

The guidelines repeat the EDPB’s prior call for a ban on the use of FRT in certain cases, specifically:

  • Remote biometric identification of individuals in publicly accessible spaces;
  • FRT categorizing individuals based on their biometrics into clusters according to ethnicity, gender, political or sexual orientation, or other grounds for discrimination;
  • Use of FRT to infer the emotions of a natural person; and
  • As described above, processing of personal data in a law enforcement context that relies on a database populated by the collection of personal data on a mass scale and in an indiscriminate way.

The EU is also currently debating whether to prohibit certain forms of “real time” remote biometric identification systems in the context of its proposal for a Regulation laying down harmonized rules on artificial intelligence (the “EU AI Act”) (see our blog here for further details). The EDPB and the European Data Protection Supervisor published a Joint Opinion on the EU AI Act last year.

Mark Young

Mark Young is an experienced tech regulatory lawyer and a vice-chair of Covington’s Data Privacy and Cybersecurity Practice Group. He advises major global companies on their most challenging data privacy compliance matters and investigations. Mark also leads on EMEA cybersecurity matters at the firm. In these contexts, he has worked closely with some of the world’s leading technology and life sciences companies and other multinationals.

Mark has been recognized for several years in Chambers UK as “a trusted adviser – practical, results-oriented and an expert in the field;” “fast, thorough and responsive;” “extremely pragmatic in advice on risk;” “provides thoughtful, strategic guidance and is a pleasure to work with;” and has “great insight into the regulators.” According to the most recent edition (2024), “He’s extremely technologically sophisticated and advises on true issues of first impression, particularly in the field of AI.”

Drawing on over 15 years of experience, Mark specializes in:

  • Advising on potential exposure under GDPR and international data privacy laws in relation to innovative products and services that involve cutting-edge technology, e.g., AI, biometric data, and connected devices.
  • Providing practical guidance on novel uses of personal data, responding to individuals exercising rights, and data transfers, including advising on Binding Corporate Rules (BCRs) and compliance challenges following Brexit and Schrems II.
  • Helping clients respond to investigations by data protection regulators in the UK, EU and globally, and advising on potential follow-on litigation risks.
  • Counseling ad networks (demand and supply side), retailers, and other adtech companies on data privacy compliance relating to programmatic advertising, and providing strategic advice on complaints and claims in a range of jurisdictions.
  • Advising life sciences companies on industry-specific data privacy issues, including:
    • clinical trials and pharmacovigilance;
    • digital health products and services; and
    • engagement with healthcare professionals and marketing programs.
  • Advising on international conflict of law issues relating to white collar investigations and data privacy compliance (collecting data from employees and others, international transfers, etc.).
  • Advising various clients on the EU NIS2 Directive and UK NIS regulations and other cybersecurity-related regulations, particularly (i) cloud computing service providers, online marketplaces, social media networks, and other digital infrastructure and service providers, and (ii) medical device and pharma companies, and other manufacturers.
  • Helping a broad range of organizations prepare for and respond to cybersecurity incidents, including personal data breaches, IP and trade secret theft, ransomware, insider threats, supply chain incidents, and state-sponsored attacks. Mark’s incident response expertise includes:
    • supervising technical investigations and providing updates to company boards and leaders;
    • advising on PR and related legal risks following an incident;
    • engaging with law enforcement and government agencies; and
    • advising on notification obligations and other legal risks, and representing clients before regulators around the world.
  • Advising clients on risks and potential liabilities in relation to corporate transactions, especially involving companies that process significant volumes of personal data (e.g., in the adtech, digital identity/anti-fraud, and social network sectors).
  • Providing strategic advice and advocacy on a range of UK and EU technology law reform issues including data privacy, cybersecurity, ecommerce, eID and trust services, and software-related proposals.
  • Representing clients in connection with references to the Court of Justice of the EU.
Marianna Drake

Marianna Drake counsels leading multinational companies on some of their most complex regulatory, policy and compliance-related issues, including data privacy and AI regulation. She focuses her practice on compliance with UK, EU and global privacy frameworks, and new policy proposals and regulations relating to AI and data. She also advises clients on matters relating to children’s privacy, online safety and consumer protection and product safety laws.

Her practice includes defending organizations in cross-border, contentious investigations and regulatory enforcement in the UK and EU Member States. Marianna also routinely partners with clients on the design of new products and services, drafting and negotiating privacy terms, developing privacy notices and consent forms, and helping clients design governance programs for the development and deployment of AI technologies.

Marianna’s pro bono work includes providing data protection advice to UK-based human rights charities, and supporting a non-profit organization in conducting legal research for strategic litigation.

Sam Jungyun Choi

Recognized by Law.com International as a Rising Star (2023), Sam Jungyun Choi is an associate in the technology regulatory group in Brussels. She advises leading multinationals on European and UK data protection law and new regulations and policy relating to innovative technologies, such as AI, digital health, and autonomous vehicles.

Sam is an expert on the EU General Data Protection Regulation (GDPR) and the UK Data Protection Act, having advised on these laws since they started to apply. In recent years, her work has evolved to include advising companies on new data and digital laws in the EU, including the AI Act, Data Act and the Digital Services Act.

Sam’s practice includes advising on regulatory, compliance and policy issues affecting leading companies in the technology, life sciences and gaming sectors on laws relating to privacy and data protection, digital services and AI. She advises clients on the design of new products and services, preparing privacy documentation, and developing data and AI governance programs. She also advises clients on matters relating to children’s privacy and policy initiatives relating to online safety.

Fredericka Argent

Fredericka Argent is a special counsel in Covington’s technology regulatory group in London. She advises leading multinationals on some of their most complex regulatory, policy and compliance-related issues, including data protection, copyright and the moderation of online content.

Fredericka regularly provides strategic advice to companies on complying with data protection laws in the UK and Europe, as well as defending organizations in cross-border, contentious investigations and regulatory enforcement in the UK and EU Member States. She advises global technology and software companies on EU copyright and database rights rules, including the implications of legislative developments on their business. She also counsels clients on a range of policy initiatives and legislation that affect the technology sector, such as the moderation of harmful or illegal content online, rules affecting the audiovisual media sector and EU accessibility laws.

Fredericka represents rights owners in the publishing, software and life sciences industries on online IP enforcement matters, and helps coordinate an in-house internet investigations team that conducts global monitoring, reporting, and notice and takedown programs to combat internet piracy.