Kristof Van Quathem

Kristof Van Quathem advises clients on information technology matters and policy, with a focus on data protection, cybercrime and various EU data-related initiatives, such as the Data Act, the AI Act and the EHDS.

Kristof has been specializing in this area for over twenty years and has developed particular experience in the life sciences and information technology sectors.  He counsels clients on government affairs strategies concerning EU lawmaking and on their compliance with applicable regulatory frameworks, and has represented clients in non-contentious and contentious matters before data protection authorities, national courts and the Court of Justice of the EU.

Kristof is admitted to practice in Belgium.

Now that the EU Artificial Intelligence Act (“AI Act”) has entered into force, the EU institutions are turning their attention to the proposal for a directive on adapting non-contractual civil liability rules to artificial intelligence (the so-called “AI Liability Directive”).  Although the EU Parliament and the Council informally agreed on the text of the proposal in December 2023 (see our previous blog posts here and here), the text of the proposal is expected to change based on a complementary impact assessment published by the European Parliamentary Research Service on September 19.

Brief Overview of the AI Liability Directive

The AI Liability Directive was proposed to establish harmonised rules for fault-based claims (e.g., negligence).  These rules were to cover the disclosure of evidence on high-risk artificial intelligence (“AI”) systems and the burden of proof, including, in certain circumstances, a rebuttable presumption of a causal link between the fault of the defendant (i.e., the provider or deployer of an AI system) and the output produced by the AI system, or the failure of the AI system to produce an output.

Potential Changes to the AI Liability Directive

In July, a slightly amended version of the European Commission’s AI Liability Directive proposal, aligning its wording with the adopted AI Act, was leaked in news reports (Council document ST 12523 2024 INIT).  The amendments reflect the difference in numbering between the proposed AI Act and the enacted version.

Over the summer, the European Parliamentary Research Service carried out a complementary impact assessment to evaluate whether the AI Liability Directive should remain on the EU’s list of priorities.  In particular, the new assessment was to determine whether the AI Liability Directive is still needed in light of the proposal for a new Product Liability Directive (see our blog post here).

Continue Reading The EU Considers Changing the EU AI Liability Directive into a Software Liability Regulation

On September 12, 2024, the European Commission announced that it will launch a public consultation on additional standard contractual clauses for international transfers of personal data to non-EU controllers and processors that are subject to the EU GDPR extra-territorially (“Additional SCCs”), something that has been promised by the European Commission

Continue Reading EU Commission Announces New SCCs for International Transfers to Non-EU Controllers and Processors Subject to the GDPR

In early March 2024, the EU lawmakers reached agreement on the European Health Data Space (EHDS).  For now, we only have a work-in-progress draft version of the text, but a number of interesting points can already be highlighted.  This article focusses on the obligations of data holders; for an overview of the EHDS generally, see our first post in this series.

We expect the final text of the EHDS to be adopted by the European Parliament in April 2024 and by the EU Member States shortly thereafter.

1: Health data holder

The term “health data holder” includes, among others, any natural or legal person developing products or services intended for health, developing or manufacturing wellness applications, or performing research in relation to healthcare, who:

  • in relation to personal electronic health data: in its capacity as a data controller, has the right or obligation to process the health data, including for research and innovation purposes; or
  • in relation to non-personal electronic health data: has the ability to make the data available through control of the technical design of a product and related services.  These terms appear to be taken from the Data Act, but they are not defined under the EHDS.

In practice, this means that, for example, hospitals, as data controllers, are data holders of their electronic health records.  Similarly, pharmaceutical companies are data holders of their clinical trial data and biobanks.  Medical device companies may be data holders of non-personal data generated by their devices, if they have access to that data and the ability to make it available.  However, medical device companies would not qualify as data holders where they merely process personal electronic health data on behalf of a hospital.

Individual researchers and micro enterprises are not data holders, unless EU Member States decide differently for their territory.

2: Data sets covered by EHDS

The EHDS sets out a long list of covered electronic health data that should be made available for secondary use under the EHDS.  It includes, among others:

  • electronic health records;
  • human genetic data;
  • biobanks;
  • data from wellness applications;
  • clinical trial data – though according to the recitals, this only applies when the trial has ended;
  • medical device data;
  • data from registries; and
  • data from research cohorts and surveys, after the first publication of the results – a qualifier that does not seem to apply for clinical trial data.

Continue Reading EHDS Series – 2: The European Health Data Space from the Health Data Holder’s Perspective

In December 2023, the Dutch Supervisory Authority (“Dutch SA”) fined a credit card company €150,000 for failure to perform a proper data protection impact assessment (“DPIA”) in accordance with Art. 35 GDPR for its “identification and verification process”.

First, the Dutch SA decided that the company was required to perform a DPIA because

Continue Reading Dutch SA Sanctions Credit Card Company for Failure to Perform Data Protection Impact Assessment

EU Advocate General Collins has reiterated that individuals’ right to claim compensation for harm caused by GDPR breaches requires proof of “actual damage suffered” as a result of the breach, and “clear and precise evidence” of such damage – mere hypothetical harms or discomfort are insufficient. The Advocate General also

Continue Reading EU Advocate General Defines “Identity Theft” And Reaffirms GDPR Compensation Threshold

On October 26, 2023, the Court of Justice of the European Union (“CJEU”) decided that the GDPR grants a patient the right to obtain a copy of his or her medical record free of charge (case C-307/22, FT v DW).  As a result, the CJEU held that a provision under German

Continue Reading CJEU Holds That GDPR Right of Access Overrules Local Laws

On October 12, 2023, the Italian Data Protection Authority (“Garante”) published guidance on the use of AI in healthcare services (“Guidance”).  The document builds on principles enshrined in the GDPR and in national and EU case-law.  Although the Guidance focuses on Italian national healthcare services, it offers considerations relevant to the use

Continue Reading Italian Garante Issues Guidance on the Use of AI in the Context of National Healthcare Services

On August 22, 2023, the Spanish Council of Ministers approved the Statute of the Spanish Agency for the Supervision of Artificial Intelligence (“AESIA”), thus creating the first AI regulatory body in the EU. The AESIA will start operating from December 2023, in anticipation of the upcoming EU AI Act (for

Continue Reading Spain Creates AI Regulator to Enforce the AI Act

On April 17, 2023, the Italian Supervisory Authority (“Garante”) published its decision against a company operating digital marketing services, finding several GDPR violations, including the use of so-called “dark patterns” to obtain users’ consent.  The Garante imposed a fine of EUR 300,000.

We provide below a brief overview of the Garante’s key findings.

Background

The sanctioned company operated marketing campaigns on behalf of its clients, via text messages, emails and automated calls.  The company’s database of contacts comprised data collected directly through its online portals (offering news, sweepstakes and trivia), as well as data purchased from data brokers.

Key Findings

Dark patterns.  The Garante found that, during the subscription process, the user was asked for specific consent relating to marketing purposes and the sharing of data with third parties for marketing.  If the user did not select either of the checkboxes, a banner would pop up, indicating the lack of consent and displaying a prominent consent button.  The site also displayed a “continue without accepting” option, but this was placed at the bottom of the webpage – outside of the pop-up banner – in simple text form and a smaller font size, which made it less visible than the “consent” button.  The Garante, referring to the EDPB’s guidelines (see our blog post here), held that the use of such interfaces and graphic elements constituted “dark patterns” with the aim of pushing individuals towards providing consent.

Double opt-in.  The Garante noted that consent was not adequately documented.  While the company argued that it required a “double opt-in”, the evidence showed that a confirmation request was not consistently sent to users.  The Garante recalled that double opt-in is not a mandatory requirement in Italy, but nonetheless constitutes an appropriate method to document consent.

Continue Reading Italian Garante Fines Digital Marketing Company Over Use of Dark Patterns

On January 24, 2023, the Italian Supervisory Authority (“Garante”) announced that it had fined three hospitals 55,000 EUR each for their unlawful use of an artificial intelligence (“AI”) system for risk stratification purposes, i.e., to systematically categorize patients based on their health status. The Garante also ordered the hospitals

Continue Reading Italian Garante Fines Three Hospitals Over Their Use of AI for Risk Stratification Purposes, Establishes That Predictive Medicine Processing Requires the Patient’s Explicit Consent