Mark Young

Mark Young, an experienced tech regulatory lawyer, advises major global companies on their most challenging data privacy compliance matters and investigations.

Mark also leads on EMEA cybersecurity matters at the firm. He advises on evolving cyber-related regulations, and helps clients respond to incidents, including personal data breaches, IP and trade secret theft, ransomware, insider threats, and state-sponsored attacks.

Mark has been recognized in Chambers UK for several years as "a trusted adviser - practical, results-oriented and an expert in the field"; "fast, thorough and responsive"; "extremely pragmatic in advice on risk"; and as having "great insight into the regulators."

Drawing on over 15 years of experience advising global companies on a variety of tech regulatory matters, Mark specializes in:

  • Advising on potential exposure under the GDPR and international data privacy laws in relation to innovative products and services that involve cutting-edge technology (e.g., AI, biometric data, and Internet-enabled devices).
  • Providing practical guidance on novel uses of personal data, responding to individuals exercising rights, and data transfers, including advising on Binding Corporate Rules (BCRs) and compliance challenges following Brexit and Schrems II.
  • Helping clients respond to investigations by data protection regulators in the UK, EU, and globally, and advising on potential follow-on litigation risks.
  • GDPR and international data privacy compliance for life sciences companies in relation to:
    • clinical trials and pharmacovigilance;
    • digital health products and services; and
    • marketing programs.
  • International conflict of law issues relating to white collar investigations and data privacy compliance.
  • Cybersecurity issues, including:
    • best practices to protect business-critical information and comply with national and sector-specific regulation;
    • preparing for and responding to cyber-based attacks and internal threats to networks and information, including training for board members;
    • supervising technical investigations; advising on PR, engagement with law enforcement and government agencies, notification obligations and other legal risks; and representing clients before regulators around the world; and
    • advising on emerging regulations, including during the legislative process.
  • Advising clients on risks and potential liabilities in relation to corporate transactions, especially those involving companies that process significant volumes of personal data (e.g., in the adtech, digital identity/anti-fraud, and social network sectors).
  • Providing strategic advice and advocacy on a range of EU technology law reform issues including data privacy, cybersecurity, ecommerce, eID and trust services, and software-related proposals.
  • Representing clients in connection with references to the Court of Justice of the EU.

Earlier this week, Members of the European Parliament (MEPs) cast their votes in favor of the much-anticipated AI Act. With 523 votes in favor, 46 against, and 49 abstentions, the vote is the culmination of an effort that began in April 2021, when the European Commission first published its proposal for the Act.

Yesterday, the European Commission, Council and Parliament announced that they had reached an agreement on the text of the Cyber Resilience Act (“CRA”). As a result, the CRA now looks set to finish its journey through the EU legislative process early next year. As we explained in our prior post about the Commission proposal

A seemingly technical development could have significant consequences for cloud service providers established outside the EU. The proposed EU Cybersecurity Certification Scheme for Cloud Services (EUCS)—which has been developed by the EU cybersecurity agency ENISA over the past two years and is expected to be adopted by the European Commission as an implementing act in Q1 2024—would, if adopted in its current form, establish certain requirements that could:

  1. exclude non-EU cloud providers from providing certain (“high” level) services to European companies, and
  2. preclude EU cloud customers from accessing the services of these non-EU providers.

Data Localization and EU Headquarters

The EUCS arises from the EU’s Cybersecurity Act, which called for the creation of an EU-wide security certification scheme for cloud providers, to be developed by ENISA and adopted by the Commission through secondary law (as noted in an earlier blog). After public consultations in 2021, ENISA set up an ad hoc working group tasked with preparing a draft.

France, Italy, and Spain submitted a proposal to the working group advocating new criteria that companies would need to meet to qualify to offer services at the highest ("high") level of security. The proposed criteria included localization of cloud services and data within the EU – meaning, in essence, that providers would need to be headquartered in, and provide their cloud services from, the EU. Ireland, Sweden, and the Netherlands argued that such requirements do not belong in a cybersecurity certification scheme, as requiring cloud providers to be based in Europe reflects political rather than cybersecurity concerns, and therefore proposed that the issue be discussed by the Council of the EU.

Continue Reading: Implications of the EU Cybersecurity Scheme for Cloud Services

On July 10, 2023, the European Commission adopted its adequacy decision on the EU-U.S. Data Privacy Framework (“DPF”). The decision, which took effect on the day of its adoption, concludes that the United States ensures an adequate level of protection for personal data transferred from the EEA to companies certified to the DPF. This blog post summarizes the key findings of the decision; what organizations wishing to certify to the DPF need to do, and the process for certifying; as well as the impact on other transfer mechanisms such as the standard contractual clauses (“SCCs”), and on transfers from the UK and Switzerland.

Background

The Commission’s adoption of the adequacy decision follows three key recent developments:

  1. the endorsement of the draft decision by a committee of EU Member State representatives;
  2. the designation by the U.S. Department of Justice of the European Union and Iceland, Liechtenstein, and Norway (which together with the EU form the EEA) as “qualifying states,” for the purposes of President Biden’s Executive Order 14086 on Enhancing Safeguards for U.S. Signals Intelligence Activities (“EO 14086”). This designation enables EU data subjects to submit complaints concerning alleged violations of U.S. law governing signals intelligence activities to the redress mechanism set forth in the Executive Order and implementing regulations (see our previous blog post here); and
  3. updates to the U.S. Intelligence Community’s policies and procedures to implement the safeguards established under EO 14086, announced by the U.S. Office of the Director of National Intelligence on July 3, 2023.

The final adequacy decision, which largely corresponds to the Commission’s draft decision (see our prior blog post here), concludes “the United States … ensures a level of protection for personal data transferred from the Union to certified organisations in the United States under the EU-U.S. Data Privacy Framework that is essentially equivalent to the one guaranteed by [the GDPR]” (para. 201).

Key Findings of the Decision

In reaching the final decision, the Commission confirms a few key points:

Continue Reading: European Commission Adopts Adequacy Decision on the EU-U.S. Data Privacy Framework

On September 15, 2022, the European Commission published a draft regulation that sets out cybersecurity requirements for “products with digital elements” (PDEs) placed on the EU market—the Cyber Resilience Act (CRA). The Commission has identified that cyberattacks are increasing in the EU, with an estimated global annual cost of €5.5 trillion. The CRA aims to strengthen the security of PDEs and imposes obligations that cover:

  1. the planning, design, development, production, delivery and maintenance of PDEs;
  2. the prevention and handling of cyber vulnerabilities; and
  3. the provision of cybersecurity information to users of PDEs.

The CRA also imposes an obligation to report to ENISA any actively exploited vulnerability, as well as any incident that impacts the security of a PDE, within 24 hours of becoming aware of it.

The obligations apply primarily to manufacturers of PDEs, which include entities that develop or manufacture PDEs, as well as entities that outsource the design, development, and manufacturing to a third party. Importers and distributors of PDEs also need to ensure that the products comply with the CRA’s requirements.

Continue Reading: EU Publishes Draft Cyber Resilience Act

The UK Government’s (UKG) proposals for new, sector-specific cybersecurity rules continue to take shape. Following the announcement of a Product Security and Telecommunications Infrastructure Bill and a consultation on the security of apps and app stores in the Queen’s Speech (which we briefly discuss here), the UKG issued a call for views on whether action is needed to ensure cyber security in data centres and cloud services (described here).

In recent weeks, the UKG has made two further announcements:

  • On 30 August 2022, it issued a response to its public consultation on the draft Electronic Communications (Security Measures) Regulations 2022 (Draft Regulations) and a draft Telecommunications Security Code of Practice (COP), before laying a revised version of the Draft Regulations before Parliament on 5 September.
  • On 1 September 2022, it issued a call for information on the risks associated with unauthorized access to individuals’ online accounts and personal data, and measures that could be taken to limit that risk.

We set out below further detail on these latest developments.

Continue Reading: A packed end to the UK’s cyber summer: Government moves forward with telecoms cybersecurity proposals and consults on a Cyber Duty to Protect

Facial recognition technology (“FRT”) has attracted a fair amount of attention over the years, including in the EU (e.g., see our posts on the European Parliament vote and CNIL guidance), the UK (e.g., ICO opinion and High Court decision) and the U.S. (e.g., Washington state and NTIA guidelines). This post summarizes two recent developments in this space: (i) the UK Information Commissioner’s Office (“ICO”)’s announcement of a £7.5-million fine and enforcement notice against Clearview AI (“Clearview”), and (ii) the EDPB’s release of draft guidelines on the use of FRT in law enforcement.

I. ICO Fines Clearview AI £7.5m

In the past year, Clearview has been subject to investigations into its data processing activities by the French and Italian authorities, and to a joint investigation by the ICO and the Australian Information Commissioner. All four regulators held that Clearview’s processing of biometric data derived from over 20 billion facial images scraped from across the internet, including from social media sites, breached data protection laws.

On 26 May 2022, the ICO released its monetary penalty notice and enforcement notice against Clearview. The ICO concluded that Clearview’s activities infringed a number of the GDPR and UK GDPR’s provisions, including:

  • Failing to process data in a way that is fair and transparent under Article 5(1)(a) GDPR. The ICO concluded that people were not made aware of, and would not reasonably expect, their images being scraped, added to a worldwide database, and made available to a wide range of customers for the purpose of matching images against the company’s database.
  • Failing to process data in a way that is lawful under the GDPR. The ICO ruled that Clearview’s processing did not meet any of the conditions for lawful processing set out in Article 6, nor, for biometric data, in Article 9(2) GDPR.
  • Failing to have a data retention policy and thus being unable to ensure that personal data are not retained for longer than necessary under Article 5(1)(e) GDPR. There was no indication as to when (or whether) any images are ever removed from Clearview’s database.
  • Failing to provide data subjects with the necessary information under Article 14 GDPR. According to the ICO’s investigation, the only way in which data subjects could obtain that information was by contacting Clearview and directly requesting it.
  • Impeding the exercise of data subject rights under Articles 15, 16, 17, 21 and 22 GDPR. In order to exercise these rights, data subjects needed to provide Clearview with additional personal data, in the form of a photograph of themselves that could be matched against Clearview’s database.
  • Failing to conduct a Data Protection Impact Assessment (“DPIA”) under Article 35 GDPR. The ICO found that Clearview failed at any time to conduct a DPIA in respect of its processing of the personal data of UK residents.

Continue Reading: Facial Recognition Update: UK ICO Fines Clearview AI £7.5m & EDPB Adopts Draft Guidelines on Use of FRT by Law Enforcement

In the early hours of Friday, 13 May, the European Parliament and the Council of the EU reached provisional political agreement on a new framework EU cybersecurity law, known as “NIS2”. This new law, which will replace the existing NIS Directive (which was agreed around the same time as GDPR, see here) aims to strengthen EU-wide cybersecurity protection across a broader range of sectors, including the pharmaceutical sector, medical device manufacturing, and the food sector.

We set out background on NIS2 in prior blog posts (e.g., in relation to the original proposal in late 2020, see here, and more recently when the Council of the EU adopted an updated version in December 2021). Whilst we are still waiting for the provisionally agreed text to be released, a few points are worth mentioning from this latest agreement:

  • Clearer delineation of scope. NIS2 will apply only to entities that meet certain size thresholds in the prescribed sectors, namely:
    • “essential entities” meaning those operating in the following sectors: energy; transport; banking; financial market infrastructures; health (including the manufacture of pharmaceutical products); drinking water; waste water; digital infrastructure (internet exchange points; DNS providers; TLD name registries; cloud computing service providers; data centre service providers; content delivery networks; trust service providers; and public electronic communications networks and electronic communications services); public administration; and space; and
    • “important entities”, meaning those operating in the following sectors: postal and courier services; waste management; chemicals; food; manufacturing of medical devices, computers and electronics, machinery equipment, motor vehicles; and digital providers (online market places, online search engines, and social networking service platforms).

Continue Reading: Political Agreement Reached on New EU Horizontal Cybersecurity Directive

As many readers will be aware, a key enforcement trend in the privacy sphere is the increasing scrutiny by regulators and activists of cookie banners and the use of cookies. This is a topic that we have been tracking on the Inside Privacy blog for some time. Italian and German data protection authorities have

On 22 September 2021, the UK Government published its 10-year strategy on artificial intelligence (“AI”; the “UK AI Strategy”).

The UK AI Strategy has three main pillars: (1) investing and planning for the long-term requirements of the UK’s AI ecosystem; (2) supporting the transition to an AI-enabled economy across all sectors and regions