On February 1, the Federal Trade Commission (“FTC”) announced its first-ever enforcement action under its Health Breach Notification Rule (“HBNR”) against digital health platform GoodRx Holdings Inc. (“GoodRx”) for failing to notify consumers and others of its unauthorized disclosures of consumers’ personal health information to third-party advertisers.  According to the proposed order, GoodRx will pay a $1.5 million civil penalty and be prohibited from sharing users’ sensitive health data with third-party advertisers in order to resolve the FTC’s complaint. 

This announcement marks the first instance in which the FTC has sought enforcement under the HBNR, which was promulgated in 2009 under the Health Information Technology for Economic and Clinical Health (“HITECH”) Act, and comes just sixteen months after the FTC published a policy statement expanding its interpretation of who is subject to the HBNR and what triggers the HBNR’s notification requirement.  Below is a discussion of the complaint and proposed order, as well as key takeaways from the case.

The Complaint

As described in the complaint, GoodRx is a digital healthcare platform that advertises, distributes, and sells health-related products and services directly to consumers.  As part of these services, GoodRx collects both personal and health information from its consumers.  According to the complaint, GoodRx “promised its users that it would share their personal information, including their personal health information, with limited third parties and only for limited purposes; that it would restrict third parties’ use of such information; and that it would never share personal health information with advertisers or other third parties.”  The complaint further alleged that GoodRx disclosed its consumers’ personal health information to various third parties, including advertisers, in violation of its own policies.  This personal health information included users’ prescription medications and personal health conditions, personal contact information, and unique advertising and persistent identifiers.

Continue Reading FTC Announces First Enforcement Action Under Health Breach Notification Rule

The Federal Energy Regulatory Commission (“FERC”) issued a final rule (Order No. 887) directing the North American Electric Reliability Corporation (“NERC”) to develop new or modified Reliability Standards that require internal network security monitoring (“INSM”) within Critical Infrastructure Protection (“CIP”) networked environments.  This Order may be of interest to entities that develop, implement, or maintain hardware or software for operational technologies associated with bulk electric systems (“BES”).

The forthcoming standards will only apply to certain high- and medium-impact BES Cyber Systems.  The final rule also requires NERC to conduct a feasibility study for implementing similar standards across all other types of BES Cyber Systems.  NERC must propose the new or modified standards within 15 months of the effective date of the final rule, which is 60 days after the date of publication in the Federal Register.  

Background

According to the FERC news release, the 2020 global supply chain attack involving the SolarWinds Orion software demonstrated how attackers can “bypass all network perimeter-based security controls traditionally used to identify malicious activity and compromise the networks of public and private organizations.”  FERC determined that current CIP Reliability Standards focus on preventing unauthorized access at the electronic security perimeter, leaving CIP-networked environments vulnerable to attacks that bypass perimeter-based security controls.  The new or modified Reliability Standards (“INSM Standards”) are intended to address this gap by requiring responsible entities to employ INSM in certain BES Cyber Systems.  INSM is a subset of network security monitoring that provides continuous visibility into communications between networked devices within the so-called “trust zone,” a term that generally describes a discrete and secure computing environment.  For purposes of the rule, the trust zone is any CIP-networked environment.  In addition to continuous visibility, INSM facilitates the detection of malicious and anomalous network activity, helping to identify and stop attacks in progress.  Examples provided by FERC of tools that may support INSM include anti-malware, intrusion detection systems, intrusion prevention systems, and firewalls.

Continue Reading FERC Orders Development of New Internal Network Security Monitoring Standards

This quarterly update summarizes key legislative and regulatory developments in the fourth quarter of 2022 related to Artificial Intelligence (“AI”), the Internet of Things (“IoT”), connected and autonomous vehicles (“CAVs”), and data privacy and cybersecurity.

Artificial Intelligence

In the last quarter of 2022, the annual National Defense Authorization Act (“NDAA”), which contained AI-related provisions, was enacted into law.  The NDAA creates a pilot program to demonstrate use cases for AI in government.  Specifically, the Director of the Office of Management and Budget (“Director of OMB”) must identify four new use cases for the application of AI-enabled systems to support modernization initiatives that require “linking multiple siloed internal and external data sources.”  The pilot program is also meant to enable agencies to demonstrate the circumstances under which AI can be used to modernize agency operations and “leverage commercially available artificial intelligence technologies that (i) operate in secure cloud environments that can deploy rapidly without the need to replace operating systems; and (ii) do not require extensive staff or training to build.”  Finally, the pilot program prioritizes use cases where AI can drive “agency productivity in predictive supply chain and logistics,” such as predictive food demand and optimized supply, predictive medical supplies and equipment demand, and predictive logistics for disaster recovery, preparedness, and response.

At the state level, in late 2022, there were also efforts to advance requirements for AI used to make certain types of decisions under comprehensive privacy frameworks.  The Colorado Privacy Act draft rules were updated to clarify the circumstances that require controllers to provide an opt-out right for the use of automated decision-making, as well as the requirements for assessments of profiling decisions.  In California, although the California Consumer Privacy Act draft regulations do not yet cover automated decision-making, the California Privacy Protection Agency rules subcommittee presented a sample list of questions concerning automated decision-making during its December 16, 2022 board meeting.

Continue Reading U.S. AI, IoT, CAV, and Privacy Legislative Update – Fourth Quarter 2022

On December 20, 2022, the Federal Trade Commission (“FTC”) announced its issuance of Health Products Compliance Guidance, which updates and replaces its previous 1998 guidance, Dietary Supplements: An Advertising Guide for Industry.  While the FTC notes that the basic content of the guide remains largely unchanged, the new guidance expands its scope beyond dietary supplements to broadly cover claims made about all health-related products, such as foods, over-the-counter drugs, devices, health apps, and diagnostic tests.  The updated guidance emphasizes “key compliance points” drawn from the numerous enforcement actions brought by the FTC since 1998, and discusses associated examples related to topics such as claim interpretation, substantiation, and other advertising issues.

Identifying Claims and Interpreting Advertisement Meaning

The updated guidance first discusses how claims are identified and interpreted, including the difference between express and implied claims.  The updated guidance emphasizes that the phrasing and context of an advertisement may imply that the product is beneficial to the treatment of a disease, which in turn would require that the advertiser be able to substantiate the implied claim with competent and reliable scientific evidence, even if the advertisement contains no express reference to the disease.

In addition, the updated guidance provides examples of when advertisers are expected to disclose qualifying information, such as when a product is targeted to a small percentage of the population or carries potentially serious risks.  Where qualifying information is necessary to avoid deception, the updated guidance discusses what constitutes a clear and conspicuous disclosure of that information.  Specifically, the guidance states that a disclosure must be provided in the same manner as the claim (i.e., if the claim is made visually, the disclosure must be made visually).  A visual disclosure should stand out and, based on its size, contrast, location, and the length of time it appears, must be easily noticed, read, and understood.  An audible disclosure should be delivered at a volume, speed, and cadence that allow it to be easily heard and understood.  On social media, the guidance states that a disclosure should be “unavoidable,” which the FTC clarifies does not include hyperlinks.  Qualifying information should not be conveyed through vague terms, such as stating that a product “may” have benefits or “helps” achieve a benefit.

Continue Reading FTC Issues New Guidance Regarding Health Products

The Northern District of California recently dismissed with prejudice a putative class action lawsuit against Google, which alleged that the company used a secret program called “Android Lockbox” to spy on Android smartphone users.  See Order Granting Motion to Dismiss, Hammerling v. Google LLC, No. 21-cv-09004-CRB (N.D. Cal. December 1, 2022).  The Court previously

Today, the California Attorney General announced the first settlement agreement under the California Consumer Privacy Act (“CCPA”).  The Attorney General alleged that online retailer Sephora, Inc. failed to disclose to consumers that it was selling their information and failed to process user requests to opt out of sale via user-enabled global privacy controls.  The Attorney

The California Privacy Protection Agency (“CPPA”) announced it will hold a special meeting on July 28, 2022 at 9 a.m. PST to discuss and potentially act on proposed federal privacy legislation, including the bipartisan American Data Protection and Privacy Act (“ADPPA”) (H.R. 8152).  The ADPPA is a comprehensive data privacy bill that advanced through

Recent months have seen a growing trend of data privacy class actions asserting claims for alleged violations of federal and state video privacy laws.  In this year alone, plaintiffs have filed dozens of new class actions in courts across the country asserting claims under the federal Video Privacy Protection Act (“VPPA”), Michigan’s Preservation of Personal

This quarterly update summarizes key federal legislative and regulatory developments in the second quarter of 2022 related to artificial intelligence (“AI”), the Internet of Things, connected and automated vehicles (“CAVs”), and data privacy, and highlights a few particularly notable developments in U.S. state legislatures.  To summarize, in the second quarter of 2022, Congress and the Administration focused on addressing algorithmic bias and other AI-related risks and introduced a bipartisan federal privacy bill.

Artificial Intelligence

Federal lawmakers introduced legislation in the second quarter of 2022 aimed at addressing risks in the development and use of AI systems, in particular risks related to algorithmic bias and discrimination.  Senator Michael Bennet (D-CO) introduced the Digital Platform Commission Act of 2022 (S. 4201), which would empower a new federal agency, the Federal Digital Platform Commission, to develop regulations for online platforms that facilitate interactions between consumers, as well as between consumers and entities offering goods and services.  Regulations contemplated by the bill include requirements that algorithms used by online platforms “are fair, transparent, and without harmful, abusive, anticompetitive, or deceptive bias.”  Although this bill does not appear to have sufficient support to pass in this Congress, it is emblematic of concerns in Congress that could later lead to legislation.

Additionally, the bipartisan American Data Privacy and Protection Act (H.R. 8152), introduced by a group of lawmakers led by Representative Frank Pallone (D-NJ-6), would require “large data holders” (defined as covered entities and service providers with over $250 million in gross annual revenue that collect, process, or transfer the covered data of over five million individuals or the sensitive covered data of over 200,000 individuals) to conduct “algorithm impact assessments” on algorithms that “may cause potential harm to an individual.”  These assessments would be required to provide, among other information, details about the design of the algorithm and the steps the entity is taking to mitigate harms to individuals.  Separately, developers of algorithms would be required to conduct “algorithm design evaluations” that evaluate the design, structure, and inputs of the algorithm.  The American Data Privacy and Protection Act is discussed in further detail in the Data Privacy section below.

Continue Reading U.S. AI, IoT, CAV, and Data Privacy Legislative and Regulatory Update – Second Quarter 2022