
Libbie Canter

Libbie Canter represents a wide variety of multinational companies on privacy, cybersecurity, and technology transaction issues, including helping clients with their most complex privacy challenges and with developing governance frameworks and processes to comply with global privacy laws. She routinely supports clients launching new products and services involving emerging technologies, and she has assisted dozens of clients in preparing for and complying with federal and state privacy laws, including the California Consumer Privacy Act and the California Privacy Rights Act.

Libbie represents clients across industries, but she also has deep expertise advising clients in highly regulated sectors, including financial services and digital health companies. She counsels these companies — and their technology and advertising partners — on how to address legacy regulatory issues as well as the cutting-edge issues that have emerged with industry innovations and data collaborations.

On October 10, 2023, California Governor Gavin Newsom signed S.B. 362, the Delete Act (the “Act”), into law.  The new law represents a substantive overhaul of California’s existing data broker statute, which requires data brokers to register with the California Attorney General annually.  The passage of the Act follows a renewed interest in data

On June 22, 2023, the Oregon state legislature passed the Oregon Consumer Privacy Act, S.B. 619 (the “Act”).  This bill resembles the comprehensive privacy statutes in Colorado, Montana, and Connecticut, though there are some notable distinctions.  If signed into law, the Act would make Oregon the twelfth state to enact a comprehensive privacy statute, joining California, Virginia, Colorado, Connecticut

On April 25, 2023, four federal agencies — the Department of Justice (“DOJ”), Federal Trade Commission (“FTC”), Consumer Financial Protection Bureau (“CFPB”), and Equal Employment Opportunity Commission (“EEOC”) — released a joint statement on the agencies’ efforts to address discrimination and bias in automated systems. 

The statement applies to “automated systems,” which are broadly defined “to mean software and algorithmic processes” beyond AI.  Although the statement notes the significant benefits that can flow from the use of automated systems, it also cautions against unlawful discrimination that may result from that use. 

The statement starts by summarizing the existing legal authorities that apply to automated systems and each agency’s guidance and statements related to AI.  Helpfully, the statement aggregates links to key AI-related guidance documents from each agency, providing a one-stop shop for important AI-related publications from all four entities.  For example, the statement summarizes the EEOC’s remit in enforcing federal laws that make it unlawful to discriminate against an applicant or employee and the EEOC’s enforcement activities related to AI, and includes a link to a technical assistance document.  Similarly, the statement outlines the FTC’s reports and guidance on AI, and includes multiple links to FTC AI-related documents.

After providing an overview of each agency’s position and links to key documents, the statement then summarizes the following sources of potential discrimination and bias, which could indicate the regulatory and enforcement priorities of these agencies.

  • Data and Datasets:  The statement notes that outcomes generated by automated systems can be skewed by unrepresentative or imbalanced datasets.  The statement says that flawed datasets, along with correlations between data and protected classes, can lead to discriminatory outcomes.
  • Model Opacity and Access:  The statement observes that some automated systems are “black boxes,” meaning that the internal workings of automated systems are not always transparent to people, and thus difficult to oversee.
  • Design and Use:  The statement also notes that flawed assumptions about users may play a role in unfair or biased outcomes.

We will continue to monitor these and related developments across our blogs.

Continue Reading DOJ, FTC, CFPB, and EEOC Statement on Discrimination and AI

By Libbie Canter, Anna D. Kraus, Olivia Vega, Elizabeth Brim & Jorge Ortiz on April 14, 2023

On April 11, the U.S. Department of Health and Human Services (“HHS”) Office for Civil Rights (“OCR”) announced that four Notifications of Enforcement Discretion (“Notifications”) that were issued under the Health Insurance Portability and Accountability Act

On February 1, the Federal Trade Commission (“FTC”) announced its first-ever enforcement action under its Health Breach Notification Rule (“HBNR”) against digital health platform GoodRx Holdings Inc. (“GoodRx”) for failing to notify consumers and others of its unauthorized disclosures of consumers’ personal health information to third-party advertisers.  According to the proposed order, to resolve the FTC’s complaint, GoodRx will pay a $1.5 million civil penalty and be prohibited from sharing users’ sensitive health data with third-party advertisers. 

This announcement marks the first instance in which the FTC has sought enforcement under the HBNR, which was promulgated in 2009 under the Health Information Technology for Economic and Clinical Health (“HITECH”) Act, and comes just sixteen months after the FTC published a policy statement expanding its interpretation of who is subject to the HBNR and what triggers the HBNR’s notification requirement.  Below is a discussion of the complaint and proposed order, as well as key takeaways from the case.

The Complaint

As described in the complaint, GoodRx is a digital healthcare platform that advertises, distributes, and sells health-related products and services directly to consumers.  As part of these services, GoodRx collects both personal and health information from its consumers.  According to the complaint, GoodRx “promised its users that it would share their personal information, including their personal health information, with limited third parties and only for limited purposes; that it would restrict third parties’ use of such information; and that it would never share personal health information with advertisers or other third parties.”  The complaint further alleged that GoodRx disclosed its consumers’ personal health information to various third parties, including advertisers, in violation of its own policies.  This personal health information included users’ prescription medications and personal health conditions, personal contact information, and unique advertising and persistent identifiers.

Continue Reading FTC Announces First Enforcement Action Under Health Breach Notification Rule

On December 20, 2022, the Federal Trade Commission (“FTC”) announced its issuance of Health Products Compliance Guidance, which updates and replaces its previous 1998 guidance, Dietary Supplements: An Advertising Guide for Industry.  While the FTC notes that the basic content of the guide is largely left unchanged, this guidance expands the scope of the previous guidance beyond dietary supplements to broadly include claims made about all health-related products, such as foods, over-the-counter drugs, devices, health apps, and diagnostic tests.  This updated guidance emphasizes “key compliance points” drawn from the numerous enforcement actions brought by the FTC since 1998, and discusses associated examples related to topics such as claim interpretation, substantiation, and other advertising issues.

Identifying Claims and Interpreting Advertisement Meaning

The updated guidance first discusses how claims are identified and interpreted, including the difference between express and implied claims.  The updated guidance emphasizes that the phrasing and context of an advertisement may imply that the product is beneficial to the treatment of a disease, which in turn would require that the advertiser be able to substantiate the implied claim with competent and reliable scientific evidence, even if the advertisement contains no express reference to the disease.

In addition, the updated guidance provides examples of when advertisers are expected to disclose qualifying information, such as when a product is targeted to a small percentage of the population or carries potentially serious risks.  Where qualifying information is necessary to avoid deception, the updated guidance discusses what constitutes a clear and conspicuous disclosure of that information.  Specifically, the guidance states that a disclosure is required to be provided in the same manner as the claim (i.e., if the claim is made visually, the disclosure is required to be made visually).  A visual disclosure should stand out and, based on its size, contrast, location, and the length of time it appears, must be easily noticed, read, and understood.  An audible disclosure should be delivered at a volume, speed, and cadence that make it easily heard and understood.  On social media, the guidance states a disclosure should be “unavoidable,” which the FTC clarifies does not include hyperlinks.  The qualifying information should not include vague qualifying terms, such as stating that a product “may” have benefits or “helps” achieve a benefit.

Continue Reading FTC Issues New Guidance Regarding Health Products

Many employers and employment agencies have turned to artificial intelligence (“AI”) tools to assist them in making better and faster employment decisions, including in the hiring and promotion processes.  The use of AI for these purposes has been scrutinized and will now be regulated in New York City.  The New York City Department of Consumer

Today, the California Attorney General announced the first settlement agreement under the California Consumer Privacy Act (“CCPA”).  The Attorney General alleged that online retailer Sephora, Inc. failed to disclose to consumers that it was selling their information and failed to process user requests to opt out of sale via user-enabled global privacy controls.  The Attorney