In a new post on the Inside Class Actions blog, our colleagues discuss a new Illinois federal court decision, Gregg v. Cent. Transp. LLC, 2024 WL 4766297, at *3 (N.D. Ill. Nov. 13, 2024), which holds that the state’s recent amendment to its Biometric Information Privacy Act capping…
Continue Reading Illinois Federal Court Rules BIPA Single-Violation Amendment Applies Retroactively
U.S. Tech Legislative, Regulatory & Litigation Update – Third Quarter 2024
This quarterly update highlights key legislative, regulatory, and litigation developments in the third quarter of 2024 related to artificial intelligence (“AI”) and connected and automated vehicles (“CAVs”). As noted below, some of these developments provide industry with the opportunity for participation and comment.
I. Artificial Intelligence
Federal Legislative Developments
There continued to be strong bipartisan interest in passing federal legislation related to AI. While it has been challenging to pass legislation through this Congress, there remains the possibility that one or more of the more targeted bills that have bipartisan support and Committee approval could advance during the lame duck period.
- Senate Commerce, Science, and Transportation Committee: Lawmakers in the Senate Commerce, Science, and Transportation Committee moved forward with nearly a dozen AI-related bills, including legislation focused on developing voluntary technical guidelines for AI systems and establishing AI testing and risk assessment frameworks.
- In July, the Committee voted to advance the Validation and Evaluation for Trustworthy (VET) Artificial Intelligence Act (S.4769), which was introduced by Senators John Hickenlooper (D-CO) and Shelley Moore Capito (R-WV). The Act would require the National Institute of Standards and Technology (“NIST”) to develop voluntary guidelines and specifications for internal and external assurances of AI systems, in collaboration with public and private sector organizations.
- In August, the Promoting United States Leadership in Standards Act of 2024 (S.3849) was placed on the Senate legislative calendar after advancing out of the Committee in July. Introduced in February 2024 by Senators Mark Warner (D-VA) and Marsha Blackburn (R-TN), the Act would require NIST to support U.S. involvement in the development of AI technical standards through briefings, pilot programs, and other activities.
- In July, the Future of Artificial Intelligence Innovation Act of 2024 (S.4178)—introduced in April by Senators Maria Cantwell (D-WA), Todd Young (R-IN), John Hickenlooper (D-CO), and Marsha Blackburn (R-TN)—was ordered to be reported out of the Committee and gained three additional co-sponsors: Senators Roger F. Wicker (R-MS), Ben Ray Lujan (D-NM), and Kyrsten Sinema (I-AZ). The Act would codify the AI Safety Institute, which would be required to develop voluntary guidelines and standards for promoting AI innovation through public-private partnerships and international alliances.
- In July, the Artificial Intelligence Research, Innovation, and Accountability Act of 2023 (S.3312) passed out of the Committee, as amended. Introduced in November 2023 by Senators John Thune (R-SD), Amy Klobuchar (D-MN), Roger Wicker (R-MS), John Hickenlooper (D-CO), Ben Ray Lujan (D-NM), and Shelley Moore Capito (R-WV), the Act would establish a comprehensive regulatory framework for “high-impact” AI systems, including testing and evaluation standards, risk assessment requirements, and transparency report requirements. The Act would also require NIST to develop sector-specific recommendations for agency oversight of high-impact AI, and to research and develop means for distinguishing between content created by humans and AI systems.
- Senate Homeland Security and Governmental Affairs Committee: In July, the Senate Homeland Security Committee voted to advance the PREPARED for AI Act (S.4495). Introduced in June by Senators Gary Peters (D-MI) and Thom Tillis (R-NC), the Act would establish a risk-based framework for the procurement and use of AI by federal agencies and create a Chief AI Officers Council and agency AI Governance Board to ensure that federal agencies benefit from advancements in AI.
- National Defense Authorization Act for Fiscal Year 2025: In August, Senators Gary Peters (D-MI) and Mike Braun (R-IN) proposed an amendment (S.Amdt.3232) to the National Defense Authorization Act for Fiscal Year 2025 (S.4638) (“NDAA”). The amendment would add the Transparent Automated Governance Act and the AI Leadership Training Act to the NDAA. The Transparent Automated Governance Act would require the Office of Management and Budget (“OMB”) to issue guidance to agencies to implement transparency practices relating to the use of AI and other automated systems. The AI Leadership Training Act would require OMB to establish a training program for federal procurement officials on the operational benefits and privacy risks of AI. The Act would also require the Office of Personnel Management (“OPM”) to establish a training program on AI for federal management officials and supervisors.
Continue Reading U.S. Tech Legislative, Regulatory & Litigation Update – Third Quarter 2024
California Federal Court Dismisses Complaint Accusing Google of Wiretapping Customer Service Calls
A federal judge in the Northern District of California recently dismissed a class action complaint accusing Google of unlawfully wiretapping calls to Verizon’s customer service center through its customer service product, Cloud Contact Center AI. See Ambriz v. Google, LLC, No. 3:23-cv-05437 (N.D. Cal. June 20, 2024).
Plaintiff Misael…
Continue Reading California Federal Court Dismisses Complaint Accusing Google of Wiretapping Customer Service Calls
Supreme Court Receives Filings with Key Implications for Climate Change Tort Suits
The Supreme Court will soon decide whether to hear two cases that could dictate the future of climate change tort suits. Such suits have proliferated in recent years: several dozen active cases assert state tort law claims—like nuisance, trespass, and strict liability—against oil and gas companies for fueling and misleading…
Continue Reading Supreme Court Receives Filings with Key Implications for Climate Change Tort Suits
Court Denies Class Certification in Antitrust Case Based on Expert’s Reliance on Unsupported Assumptions in Damages Model
The Northern District of Illinois recently denied certification to several proposed classes of purchasers of a seizure drug called Acthar in City of Rockford v. Mallinckrodt ARD, Inc., No. 3:17-cv-50107, 2024 WL 1363544 (Mar. 29, 2024). Class plaintiffs had alleged that defendant Express Scripts, a drug distributor, conspired with…
Continue Reading Court Denies Class Certification in Antitrust Case Based on Expert’s Reliance on Unsupported Assumptions in Damages Model
U.S. Tech Legislative, Regulatory & Litigation Update – Fourth Quarter 2023
This quarterly update highlights key legislative, regulatory, and litigation developments in the fourth quarter of 2023 and early January 2024 related to technology issues. These included developments related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), data privacy, and cybersecurity. As noted below, some of these developments provide companies with the opportunity for participation and comment.
I. Artificial Intelligence
Federal Executive Developments on AI
The Executive Branch and U.S. federal agencies had an active quarter, which included the White House’s October 2023 release of the Executive Order (“EO”) on Safe, Secure, and Trustworthy Artificial Intelligence. The EO declares a host of new actions for federal agencies designed to set standards for AI safety and security; protect Americans’ privacy; advance equity and civil rights; protect vulnerable groups such as consumers, patients, and students; support workers; promote innovation and competition; advance American leadership abroad; and effectively regulate the use of AI in government. The EO builds on the White House’s prior work surrounding the development of responsible AI. Concerning privacy, the EO sets forth a number of requirements for the use of personal data for AI systems, including the prioritization of federal support for privacy-preserving techniques and strengthening privacy-preserving research and technologies (e.g., cryptographic tools). Regarding equity and civil rights, the EO calls for clear guidance to landlords, Federal benefits programs, and Federal contractors to keep AI systems from being used to exacerbate discrimination. The EO also sets out requirements for developers of AI systems, including requiring companies developing any foundation model “that poses a serious risk to national security, national economic security, or national public health and safety” to notify the federal government when training the model and provide results of all red-team safety tests to the government.
Federal Legislative Activity on AI
Congress continued to evaluate AI legislation and proposed a number of AI bills, though none of these bills are expected to progress in the immediate future. For example, members of Congress continued to hold meetings on AI and introduced bills related to deepfakes, AI research, and transparency for foundational models.
- Deepfakes and Inauthentic Content: In October 2023, a group of bipartisan senators released a discussion draft of the NO FAKES Act, which would prohibit persons or companies from producing an unauthorized digital replica of an individual in a performance or hosting unauthorized digital replicas if the platform has knowledge that the replica was not authorized by the individual depicted.
- Research: In November 2023, Senator Thune (R-SD), along with five bipartisan co-sponsors, introduced the Artificial Intelligence Research, Innovation, and Accountability Act (S. 3312), which would require covered internet platforms that operate generative AI systems to provide their users with clear and conspicuous notice that the covered internet platform uses generative AI.
- Transparency for Foundational Models: In December 2023, Representative Beyer (D-VA-8) introduced the AI Foundation Model Act (H.R. 6881), which would direct the Federal Trade Commission (“FTC”) to establish transparency standards for foundation model deployers in consultation with other agencies. The standards would require companies to provide consumers and the FTC with information on a model’s training data and mechanisms, as well as information regarding whether user data is collected in inference.
- Bipartisan Senate Forums: Senator Schumer’s (D-NY) AI Insight Forums, which are a part of his SAFE Innovation Framework, continued to take place this quarter. As part of these forums, bipartisan groups of senators met multiple times to learn more about key issues in AI policy, including privacy and liability, long-term risks of AI, and national security.
Continue Reading U.S. Tech Legislative, Regulatory & Litigation Update – Fourth Quarter 2023
All but One Claim in Pathology Lab Data Breach Class Action Tossed on Motion to Dismiss
Only one claim survived dismissal in a recent putative class action lawsuit alleging that a pathology laboratory failed to safeguard patient data in a cyberattack. See Order Granting Motion to Dismiss in Part, Thai v. Molecular Pathology Laboratory Network, Inc., No. 3:22-CV-315-KAC-DCP (E.D. Tenn. Sep. 29, 2023), ECF 38.…
Continue Reading All but One Claim in Pathology Lab Data Breach Class Action Tossed on Motion to Dismiss
Section 337 Developments at the U.S. International Trade Commission
Practice and Procedure
The ITC’s Recent Sua Sponte Use of 100-Day Expedited Adjudication Procedure
Over the last few years, the International Trade Commission (“ITC” or “Commission”) has developed procedural mechanisms geared toward identifying potentially dispositive issues for early disposition in its investigations. These procedures are meant to give respondents an opportunity to litigate a dispositive issue before committing the resources necessary to litigate an entire Section 337 investigation.
In 2018, the ITC adopted 19 C.F.R. § 210.10(b)(3), which provides that “[t]he Commission may order the administrative law judge to issue an initial determination within 100 days of institution . . . ruling on a potentially dispositive issue as set forth in the notice of investigation.” Although the ITC denies the majority of requests by respondents to use this procedural mechanism, the ITC has ordered its ALJs to use this program in a handful of investigations to decide, among other things, whether the asserted patents claim patent-eligible subject matter, whether a complainant has standing to sue, whether a complainant can prove economic domestic industry, and whether claim or issue preclusion applies.
In a recent complaint filed in Certain Selective Thyroid Hormone Receptor-Beta Agonists, Processes for Manufacturing or Relating to Same, and Products Containing Same, Inv. No. 337-TA-1352, Complainant Viking Therapeutics, Inc. (“Viking”) alleged that respondents had misappropriated trade secrets to create their own drug candidates to compete with Viking’s VK2809 (phase 2) clinical drug candidate. As required by Section 337(a)(1)(A) governing trade secret cases, Viking alleged that the respondents’ unfair acts caused injury and threatened to cause injury going forward to Viking’s domestic industry. Viking’s theory of injury was based on the assumption that Viking’s VK2809 drug candidate and respondents’ ASC41 and ASC43F drug candidates would both receive FDA approval, would both launch into the same market, and would compete with one another. Viking’s complaint stated that its domestic industry product drug candidate, VK2809, would be brought to market in 2028.
Unlike past instances where the ITC employed 100-day proceedings, the Commission took the remarkable step of placing this investigation into a 100-day proceeding sua sponte on the issue of injury, even though no respondent raised the issue of injury as a basis to deny institution or order expedited adjudication. See Notice of Institution (Jan. 20, 2023). Respondents had not even argued that Viking’s injury allegations were deficient in their pre-institution filing. Commissioner Schmidtlein wrote separately to express her disagreement with the majority’s decision to order an expedited proceeding, noting that “these issues [are not] suitable for resolution within 100 days.”
Continue Reading Section 337 Developments at the U.S. International Trade Commission
California Court Applies “Substance Over Form,” Allows True Lender Claim to Proceed
May courts look beyond the face of a loan transaction to identify the “true lender”? In a lawsuit filed by California’s financial regulator, a California state court recently answered yes, finding that a fact-intensive inquiry into the “substance” of a loan transaction was necessary to determine who the “true lender”…
Continue Reading California Court Applies “Substance Over Form,” Allows True Lender Claim to Proceed
Half Year Review: Insurance Coverage Litigation (H1 2022)
This half-yearly update on insurance coverage litigation summarises significant insurance coverage cases in the English courts and provides a detailed analysis of the Corbin & King v AXA Insurance UK Plc case, highlighting the key takeaways for policyholders. In the first half of 2022, the English courts have delivered important judgments on a number of critical issues for policyholders, including Covid-19 business interruption insurance, aggregation clauses, insurers’ implied obligation to pay claims within a “reasonable” time, and the effect of lenders’ mortgagee interest insurance policies; some of which are policyholder friendly, some less so.
Significant cases 2022 H1
Corbin & King v AXA Insurance UK Plc [2022] EWHC 409 (Comm): In the most anticipated decision of the last half-year relating to Covid-19 business interruption losses, the English High Court determined, in favour of a restaurant business, that a prevention of access clause in its policy was triggered by the Government-mandated lockdowns arising from Covid-19 in 2020 and 2021. Given the importance of this case for policyholders, we analyse the court’s findings in further detail below.
Spire Healthcare Limited v Royal & Sun Alliance Insurance Limited [2022] EWCA Civ 17: This decision is the latest word on the interpretation of “aggregation clauses” in insurance policies that require a policyholder to aggregate similar or related losses into a single claim against the insurer, which is then subject to a liability cap on each claim. The Court of Appeal held that several claims against the policyholder could be aggregated into one claim against the insurer on the basis that there was “one source or original cause” of the policyholder’s loss. As a result, the policyholder’s recovery was limited to £10 million, the policy limit per claim.
Continue Reading Half Year Review: Insurance Coverage Litigation (H1 2022)