On April 17, 2023, the Italian Supervisory Authority (“Garante”) published its decision against a company operating digital marketing services, finding several GDPR violations, including the use of so-called “dark patterns” to obtain users’ consent.  The Garante imposed a fine of EUR 300,000.

We provide below a brief overview of the Garante’s key findings.

Background

The sanctioned company operated marketing campaigns on behalf of its clients, via text messages, emails and automated calls.  The company’s database of contacts was formed by data collected directly through its online portals (offering news, sweepstakes and trivia), as well as data purchased from data brokers.

Key Findings

Dark patterns.  The Garante found that, during the subscription process, the user was asked for specific consent relating to marketing purposes and sharing of data with third parties for marketing.  If the user did not select either of the checkboxes, a banner would pop up, indicating the lack of consent and displaying a prominent consent button.  The site also displayed a “continue without accepting” option, but this was placed at the bottom of the webpage – outside of the pop-up banner – in simple text form and smaller font size, which made it less visible than the “consent” button.  The Garante, referring to the EDPB’s guidelines (see our blogpost here), held that the use of such interfaces and graphic elements constituted “dark patterns” with the aim of pushing individuals towards providing consent.

Double opt-in.  The Garante noted that consent was not adequately documented.  While the company argued that it required a “double opt-in”, the evidence showed that a confirmation request was not consistently sent out to users.  The Garante recalled that double opt-in is not a mandatory requirement in Italy, but nonetheless constitutes an appropriate method to document consent.
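To illustrate why double opt-in helps document consent, the sketch below shows the two steps in miniature: a checkbox tick only creates a pending request, and consent is recorded (with a timestamp) only once the user confirms via a single-use token sent by email.  All names here (`ConsentStore`, the method names, the example address) are our own, purely illustrative, and not drawn from the decision:

```python
import secrets
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentStore:
    """Minimal in-memory store illustrating a double opt-in flow."""
    pending: dict = field(default_factory=dict)    # token -> email
    confirmed: dict = field(default_factory=dict)  # email -> record

    def request_consent(self, email: str) -> str:
        # Step 1: the user ticks the marketing checkbox. Nothing is
        # documented yet -- we only issue a single-use confirmation
        # token that would be emailed to the user.
        token = secrets.token_urlsafe(16)
        self.pending[token] = email
        return token

    def confirm(self, token: str) -> bool:
        # Step 2: the user clicks the emailed link. Only now is consent
        # recorded, together with a timestamp documenting when it was
        # given -- this record is what makes the consent demonstrable.
        email = self.pending.pop(token, None)
        if email is None:
            return False  # unknown or already-used token
        self.confirmed[email] = {
            "confirmed_at": datetime.now(timezone.utc).isoformat(),
            "token": token,
        }
        return True

store = ConsentStore()
token = store.request_consent("user@example.com")
assert "user@example.com" not in store.confirmed  # not yet documented
assert store.confirm(token) is True
assert "user@example.com" in store.confirmed      # consent now documented
assert store.confirm(token) is False              # token is single-use
```

The point the Garante made maps onto the second step: if the confirmation request is never sent (or never acted upon), nothing lands in the confirmed record, so the controller has no evidence of consent.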

Continue Reading Italian Garante Fines Digital Marketing Company Over Use of Dark Patterns

On April 11, 2023, the Cyberspace Administration of China (“CAC”) released draft Administrative Measures for Generative Artificial Intelligence Services (《生成式人工智能服务管理办法(征求意见稿)》) (“draft Measures”) (official Chinese version available here) for public consultation.  The deadline for submitting comments is May 10, 2023.

The draft Measures would regulate generative Artificial Intelligence (“AI”) services that are “provided to the public in mainland China.”  These requirements cover a wide range of issues that are frequently debated in relation to the governance of generative AI globally, such as data protection, non-discrimination, bias and the quality of training data.  The draft Measures also highlight issues arising from the use of generative AI that are of particular concern to the Chinese government, such as content moderation, the completion of a security assessment for new technologies, and algorithmic transparency.  The draft Measures thus reflect the Chinese government’s objective to craft its own governance model for new technologies such as generative AI.

Further, and notwithstanding the requirements introduced by the draft Measures (as described in greater detail below), the text states that the government encourages the (indigenous) development of (and international cooperation in relation to) generative AI technology, and encourages companies to adopt “secure and trustworthy software, tools, computing and data resources” to that end. 

Notably, the draft Measures do not make a distinction between generative AI services offered to individual consumers or enterprise customers, although certain requirements appear to be more directed to consumer-facing services than enterprise services.

Continue Reading China Proposes Draft Measures to Regulate Generative AI

On March 7, 2023, during the annual National People’s Congress (“NPC”) sessions, China’s State Council revealed its plan to establish a National Data Bureau (“NDB”) as part of a broader reorganization of government agencies. The plan is being deliberated by the NPC and is expected to be finalized soon. 

According to the draft plan, the new National Data Bureau will be a deputy ministry-level agency under the National Development and Reform Commission (“NDRC”), China’s main economic planning agency that is in charge of industrial policies.  The new bureau will be responsible for, among other areas, “coordinating the integration, sharing, development, and utilization of data resources,” and “pushing forward the planning and building of a Digital China, a digital economy, and a digital society.” 

The plan specifies the new agency will take over certain portfolios currently managed by the Communist Party’s Central Cyberspace Affairs Commission (the party organ that supervises the Cyberspace Administration of China, “CAC”) and the NDRC. Specifically, the NDB will assume responsibility for “coordinating the development, utilization, and sharing of important national data resources, and promoting the exchange of data resources across industries and across departments,” a function currently performed by CAC.  The NDB will also absorb the NDRC teams responsible for promoting the development of the digital economy and implementing the national “big data” strategy.

Continue Reading China Reveals Plan to Establish a National Data Bureau

On February 9, 2023, the Court of Justice of the EU (“CJEU”) released two separate rulings on the dismissal of data protection officers (“DPOs”) under the German Federal Data Protection Law (“German DPL”) (C-453/21 and C-560/21).  The main question in both cases was whether Section 6(4) of the German DPL, which permits the dismissal of a DPO with “just cause”, is compatible with the GDPR.  In short, the CJEU (i) found that the provision was compatible with the GDPR because EU member states can use “just cause” as a threshold for dismissal as long as this does not undermine the objectives set for DPOs under the GDPR, and (ii) clarified the criteria EU member states should take into account to determine whether there is a conflict of interest.

The CJEU rulings concerned DPOs who were employed at German companies and dismissed “for just cause” from their respective DPO positions due to conflicts of interest concerns.  In one case, the DPO was simultaneously chair of the company’s works council.  In the other case, there was a perceived incompatibility with the DPO’s other professional responsibilities at the company (which the judgment does not disclose).  Importantly, the DPOs had not been dismissed because of the way they performed their duties and tasks as a DPO.

The term “just cause” is used in the German Civil Code to refer to situations where it cannot be reasonably expected for the employment contract to continue as normal, i.e., until the end of the notice period or until the agreed termination date, taking into account all the circumstances of the individual case and weighing the interests of both parties.  This requirement goes beyond the provision in Article 38(3) GDPR, which provides that the DPO “shall not be dismissed or penalized by the controller or the processor for performing his tasks.”

Continue Reading Court of Justice of the EU Clarifies Rules on Data Protection Officers’ Dismissal and Conflicts of Interest

On February 1, 2023, the Federal Trade Commission (“FTC”) announced its first-ever enforcement action under its Health Breach Notification Rule (“HBNR”) against digital health platform GoodRx Holdings Inc. (“GoodRx”) for failing to notify consumers and others of its unauthorized disclosures of consumers’ personal health information to third-party advertisers.  According to the proposed order, GoodRx will pay a $1.5 million civil penalty and be prohibited from sharing users’ sensitive health data with third-party advertisers to resolve the FTC’s complaint. 

This announcement marks the first instance in which the FTC has sought enforcement under the HBNR, which was promulgated in 2009 under the Health Information Technology for Economic and Clinical Health (“HITECH”) Act, and comes just sixteen months after the FTC published a policy statement expanding its interpretation of who is subject to the HBNR and what triggers the HBNR’s notification requirement.  Below is a discussion of the complaint and proposed order, as well as key takeaways from the case.

The Complaint

As described in the complaint, GoodRx is a digital healthcare platform that advertises, distributes, and sells health-related products and services directly to consumers.  As part of these services, GoodRx collects both personal and health information from its consumers.  According to the complaint, GoodRx “promised its users that it would share their personal information, including their personal health information, with limited third parties and only for limited purposes; that it would restrict third parties’ use of such information; and that it would never share personal health information with advertisers or other third parties.”  The complaint further alleged that GoodRx disclosed its consumers’ personal health information to various third parties, including advertisers, in violation of its own policies.  This personal health information included users’ prescription medications and personal health conditions, personal contact information, and unique advertising and persistent identifiers.

Continue Reading FTC Announces First Enforcement Action Under Health Breach Notification Rule

The Federal Energy Regulatory Commission (“FERC”) issued a final rule (Order No. 887) directing the North American Electric Reliability Corporation (“NERC”) to develop new or modified Reliability Standards that require internal network security monitoring (“INSM”) within Critical Infrastructure Protection (“CIP”) networked environments.  This Order may be of interest to entities that develop, implement, or maintain hardware or software for operational technologies associated with bulk electric systems (“BES”).

The forthcoming standards will only apply to certain high- and medium-impact BES Cyber Systems.  The final rule also requires NERC to conduct a feasibility study for implementing similar standards across all other types of BES Cyber Systems.  NERC must propose the new or modified standards within 15 months of the effective date of the final rule, which is 60 days after the date of publication in the Federal Register.  

Background

According to the FERC news release, the 2020 global supply chain attack involving the SolarWinds Orion software demonstrated how attackers can “bypass all network perimeter-based security controls traditionally used to identify malicious activity and compromise the networks of public and private organizations.”  FERC determined that the current CIP Reliability Standards focus on preventing unauthorized access at the electronic security perimeter, leaving CIP-networked environments vulnerable to attacks that bypass perimeter-based security controls.  The new or modified Reliability Standards (“INSM Standards”) are intended to address this gap by requiring responsible entities to employ INSM in certain BES Cyber Systems.  INSM is a subset of network security monitoring that enables continuous visibility over communications between networked devices that are in the so-called “trust zone,” a term which generally describes a discrete and secure computing environment.  For purposes of the rule, the trust zone is any CIP-networked environment.  In addition to continuous visibility, INSM facilitates the detection of malicious and anomalous network activity to identify and prevent attacks in progress.  Examples provided by FERC of tools that may support INSM include anti-malware, intrusion detection systems, intrusion prevention systems, and firewalls.
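Conceptually, INSM observes the east-west traffic between devices inside the trust zone and flags communications that deviate from expected behavior, rather than only inspecting traffic crossing the perimeter.  The toy sketch below illustrates that baseline-comparison idea with made-up device names; real INSM products inspect live traffic and are far more sophisticated:

```python
# Illustrative only: a toy baseline check for internal (east-west)
# flows inside a hypothetical "trust zone". Device names are invented.

# Baseline: which internal devices are expected to talk to each other.
baseline = {
    ("hmi-01", "plc-01"),
    ("hmi-01", "plc-02"),
    ("historian", "plc-01"),
}

def anomalous_flows(observed):
    """Return observed device-to-device flows absent from the baseline."""
    return [flow for flow in observed if flow not in baseline]

observed = [
    ("hmi-01", "plc-01"),         # expected: ignored
    ("workstation-7", "plc-02"),  # unexpected: flagged for review
]
alerts = anomalous_flows(observed)
assert alerts == [("workstation-7", "plc-02")]
```

An attacker who has already bypassed the perimeter (as in the SolarWinds case) would show up here as a new, unexpected internal flow, which is exactly the visibility gap the INSM Standards are meant to close.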

Continue Reading FERC Orders Development of New Internal Network Security Monitoring Standards

At the beginning of a new year, we are looking ahead to five key technology trends in the EMEA region that are likely to impact businesses in 2023.

1. Technology Regulations across EMEA

European Union

If 2022 was the year that the EU reached political agreement on a series of landmark legislation regulating the technology sector, 2023 will be the year that some of this legislation starts to bite:

  • The Digital Services Act (DSA): By 17 February 2023, online platforms and online search engines need to publish the number of monthly average users in the EU. Providers that are designated as “very large online platforms” and “very large search engines” will need to start complying with the DSA in 2023, and we may start to see Commission investigations kicking off later in the year too.
  • The Digital Markets Act (DMA): The DMA starts applying from 2 May 2023. By 3 July 2023, gatekeepers need to notify their “core platform services” to the Commission.
  • The Data Governance Act (DGA): The DGA becomes applicable from 24 September 2023.

Also this year, proposals published under the European Data Strategy (such as the Data Act and European Health Data Space) and EU legislation targeting artificial intelligence (AI) systems (including the AI Act, AI Liability Directive and revised Product Liability Directive) will continue making their way through the EU’s legislative process. These legislative developments will have a significant impact on the way that businesses ingest, use and share data and develop and deploy AI systems. In addition, the new liability rules will create potentially significant new litigation exposure for software and AI innovators.

Continue Reading Top Five EMEA Technology Trends to Watch in 2023

This quarterly update summarizes key legislative and regulatory developments in the fourth quarter of 2022 related to Artificial Intelligence (“AI”), the Internet of Things (“IoT”), connected and autonomous vehicles (“CAVs”), and data privacy and cybersecurity.

Artificial Intelligence

In the last quarter of 2022, the annual National Defense Authorization Act (“NDAA”), which contained AI-related provisions, was enacted into law.  The NDAA creates a pilot program to demonstrate use cases for AI in government. Specifically, the Director of the Office of Management and Budget (“Director of OMB”) must identify four new use cases for the application of AI-enabled systems to support modernization initiatives that require “linking multiple siloed internal and external data sources.” The pilot program is also meant to enable agencies to demonstrate the circumstances under which AI can be used to modernize agency operations and “leverage commercially available artificial intelligence technologies that (i) operate in secure cloud environments that can deploy rapidly without the need to replace operating systems; and (ii) do not require extensive staff or training to build.” Finally, the pilot program prioritizes use cases where AI can drive “agency productivity in predictive supply chain and logistics,” such as predictive food demand and optimized supply, predictive medical supplies and equipment demand, predictive logistics for disaster recovery, preparedness and response.

At the state level, in late 2022, there were also efforts to advance requirements for AI used to make certain types of decisions under comprehensive privacy frameworks.  The Colorado Privacy Act draft rules were updated to clarify the circumstances that require controllers to provide an opt-out right for the use of automated decision-making and requirements for assessments of profiling decisions.  In California, although the California Consumer Privacy Act draft regulations do not yet cover automated decision-making, the California Privacy Protection Agency rules subcommittee provided a sample list of questions on automated decision-making during its December 16, 2022 board meeting.

Continue Reading U.S. AI, IoT, CAV, and Privacy Legislative Update – Fourth Quarter 2022

In a new post on the Inside Tech Media blog, our colleagues discuss the “Quantum Computing Cybersecurity Preparedness Act,” which President Biden signed into law in the final days of 2022.  The Act recognizes that current encryption protocols used by the federal government might one day be vulnerable to compromise as a result of

On December 28, 2022, the Spanish Data Protection Authority (“AEPD”) published a statement on the interplay between its recently approved Spanish code of conduct for the pharmaceutical industry and the European Federation of Pharmaceutical Industries and Associations’ (“EFPIA”) proposal for an EU code of conduct on clinical trials and pharmacovigilance.  The statement relates specifically to