Privacy & Data Security

An Illinois federal court has dismissed a proposed class action alleging X Corp. violated the state’s Biometric Information Privacy Act (“BIPA”) through its use of PhotoDNA software to create “hashes” of images to scan for nudity and related content. The court held that Plaintiff failed to allege that the hashes identified photo subjects and therefore failed to allege that the hashes constituted biometric identifiers. Martell v. X Corp., 2024 WL 3011353, at *4 (N.D. Ill. June 13, 2024).
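The distinction the court drew turns on what a “hash” of an image is. PhotoDNA’s algorithm is proprietary, so the short Python sketch below is only a loose, simplified illustration of a generic perceptual “average hash,” not PhotoDNA itself: the hash value is derived from an image’s overall pixel brightness, without locating a face or measuring its geometry. The file name photo.jpg is a placeholder.

    # A minimal "average hash" sketch, assuming the Pillow library is installed.
    # This is NOT PhotoDNA's proprietary algorithm; it only illustrates that an
    # image hash can be computed from coarse, whole-image pixel data.
    from PIL import Image

    def average_hash(path: str, hash_size: int = 8) -> str:
        """Return a 64-bit hash derived from overall pixel luminance."""
        # Downscale to an 8x8 grayscale thumbnail; fine detail (including any
        # facial features) is collapsed into coarse brightness values.
        img = Image.open(path).convert("L").resize((hash_size, hash_size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        # Each bit records whether a cell is brighter than the image-wide mean.
        bits = "".join("1" if p > mean else "0" for p in pixels)
        return f"{int(bits, 2):016x}"

    # Hypothetical usage: print(average_hash("photo.jpg"))

Because every pixel contributes equally to the value, two visually similar images yield similar hashes, which is what makes hash matching useful for flagging known content regardless of whether a face appears in the frame.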

BIPA prohibits private entities from collecting or capturing “a person’s or a customer’s biometric identifier or biometric information” without first obtaining the subject’s informed consent, among other requirements. 740 ILCS 14/15(b). BIPA defines “biometric identifier” as “a retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry” and defines “biometric information” as any information “based on an individual’s biometric identifier used to identify an individual.” 740 ILCS 14/10.

In dismissing the complaint, the court agreed with X’s arguments that Plaintiff failed to plausibly allege (1) that the PhotoDNA software collects scans of facial geometry and (2) that the hashes identified photo subjects.  First, the court rejected Plaintiff’s “conclusory” assertion that the creation of a hash from a photo that includes a person’s face “necessitates” creating a scan of facial geometry, explaining that “[t]he fact that PhotoDNA creates a unique hash for each photo does not necessarily imply that it is scanning for an individual’s facial geometry when creating the hash.”  Id. at *2.  The court distinguished Plaintiff’s allegations from those that withstood dismissal in a different case, in which the plaintiff alleged that scans of photos “located her face and zeroed in on its unique contours to create a ‘template’ that maps and records her distinct facial measurements.”  Id. at *3 (quoting Rivera v. Google Inc., 238 F. Supp. 3d 1088, 1091 (N.D. Ill. 2017)).

Continue Reading Illinois Federal Court Dismisses BIPA Suit Against X, Holding “Biometric Identifiers” Must Identify Individuals

In late December 2023, the Federal Communications Commission (“FCC”) published a Report and Order (“Order”) expanding the scope of the data breach notification rules (“Rules”) applicable to telecommunications carriers and interconnected VoIP (“iVoIP”) providers.  The Order makes several notable changes to the prior rules, including broadening the definitions of a reportable “breach” and of the customer data covered by the Rules.

Continue Reading The FCC Expands Scope of Data Breach Notification Rules

This quarterly update summarizes key federal legislative and regulatory developments in the second quarter of 2022 related to artificial intelligence (“AI”), the Internet of Things, connected and automated vehicles (“CAVs”), and data privacy, and highlights a few particularly notable developments in U.S. state legislatures.  In short, during the second quarter of 2022, Congress and the Administration focused on addressing algorithmic bias and other AI-related risks, and federal lawmakers introduced a bipartisan federal privacy bill.

Artificial Intelligence

Federal lawmakers introduced legislation in the second quarter of 2022 aimed at addressing risks in the development and use of AI systems, in particular risks related to algorithmic bias and discrimination.  Senator Michael Bennet (D-CO) introduced the Digital Platform Commission Act of 2022 (S. 4201), which would empower a new federal agency, the Federal Digital Platform Commission, to develop regulations for online platforms that facilitate interactions between consumers, as well as between consumers and entities offering goods and services.  Regulations contemplated by the bill include requirements that algorithms used by online platforms “are fair, transparent, and without harmful, abusive, anticompetitive, or deceptive bias.”  Although the bill does not appear to have the support needed to pass in this Congress, it is emblematic of concerns among lawmakers that might later lead to legislation.

Additionally, the bipartisan American Data Privacy and Protection Act (H.R. 8152), introduced by a group of lawmakers led by Representative Frank Pallone (D-NJ-6), would require “large data holders” (defined as covered entities and service providers with over $250 million in gross annual revenue that collect, process, or transfer the covered data of over five million individuals or the sensitive covered data of over 200,000 individuals) to conduct “algorithm impact assessments” on algorithms that “may cause potential harm to an individual.”  These assessments would be required to provide, among other information, details about the design of the algorithm and the steps the entity is taking to mitigate harms to individuals.  Separately, developers of algorithms would be required to conduct “algorithm design evaluations” that evaluate the design, structure, and inputs of the algorithm.  The American Data Privacy and Protection Act is discussed in further detail in the Data Privacy section below.

Continue Reading U.S. AI, IoT, CAV, and Data Privacy Legislative and Regulatory Update – Second Quarter 2022

On April 28, 2022, Covington convened experts across our practice groups for the Covington Robotics Forum, which explored recent developments and forecasts relevant to industries affected by robotics.  Sam Jungyun Choi, Associate in Covington’s Technology Regulatory Group, and Anna Oberschelp, Associate in Covington’s Data Privacy & Cybersecurity Practice Group, discussed global regulatory trends that affect robotics, highlights of which are captured here.  A recording of the forum is available here until May 31, 2022.

Trends on Regulating Artificial Intelligence

According to the Organization for Economic Cooperation and Development (“OECD”) Artificial Intelligence Policy Observatory, since 2017, at least 60 countries have adopted some form of AI policy, a torrent of government activity that nearly matches the pace of modern AI adoption.  Countries around the world are establishing governmental and intergovernmental strategies and initiatives to guide the development of AI.  These AI initiatives include: (1) AI regulation or policy; (2) AI enablers (e.g., research and public awareness); and (3) financial support (e.g., procurement programs for AI R&D).  The anticipated introduction of AI regulations raises concerns about looming challenges for international cooperation.

Continue Reading Robotics Spotlight: Global Regulatory Trends Affecting Robotics