On 24 January 2023, the Italian Supervisory Authority (“Garante”) announced that it had fined three hospitals 55,000 EUR each for their unlawful use of an artificial intelligence (“AI”) system for risk stratification purposes, i.e., to systematically categorize patients based on their health status. The Garante also ordered the hospitals to erase all the data they obtained as a consequence of that unlawful processing.
The hospitals used the AI technology to “profile” their patients, predict whether they might develop certain pathologies, sort them into the corresponding risk groups and, based on that, assign them a priority class in the hospitals’ waiting lists. The hospitals indicated that, in essence, they used the AI for predictive medicine purposes, which is part of their standard healthcare activities (Article 9(2)(h) GDPR).
The Garante, however, disagreed. In particular, it considered that the processing of health data for the purposes of predicting whether a patient may develop certain pathologies “must be considered additional and autonomous to the processing strictly necessary for the standard activities of care and prevention (Article 9(2)(h) of the GDPR), and therefore can be carried out only on the basis of the specific informed consent of the data subject (Article 9(2)(a) of the GDPR).” According to the Garante, the standard activities of prevention and care do not include automated patient profiling and risk scoring in order to develop a risk-stratified care management system.
If this restrictive interpretation by the Garante of what qualifies as “preventive medicine” and the “provision of health care” is confirmed throughout the EU, it could have important ramifications for the introduction of a wide spectrum of AI and e-health technologies in healthcare.
The Covington Team is happy to provide advice or answer any questions you may have on the topic.