Artificial Intelligence (AI)

In late August, the California legislature passed two bills that would limit the creation or use of “digital replicas,” making California the latest state to seek new protections for performers, artists, and other employees in response to the rise of AI-generated content.  These state efforts come as Congress considers the

Continue Reading California Passes Digital Replica Legislation as Congress Considers Federal Approach

On July 30, 2024, the European Commission announced the launch of a consultation on trustworthy general-purpose artificial intelligence (“GPAI”) models and an invitation to stakeholders to express their interest in participating in the drawing up of the first GPAI Code of Practice (the “Code”) under the newly passed EU AI Act (see our previous blog here). Once the Code is finalized, GPAI model providers will be able to rely on it voluntarily to demonstrate their compliance with certain obligations in the AI Act.

Consultation

The consultation provides stakeholders with the opportunity to have their say on topics that will be covered by the Code. It will also inform the AI Office’s development of the template summary of training material that GPAI model providers will be required to publish under Article 53(1)(d) of the AI Act.

The consultation covers three topics:

  1. Transparency and copyright: This relates to the documentation and policies that providers of GPAI models should have in place to comply with EU copyright law. Part D relates to the content and level of granularity expected from the template summary of training material referenced above.
  2. GPAI models with systemic risk: This relates to how the systemic risks associated with certain GPAI models should be classified, identified and assessed, mitigated, and internally governed (through policies and procedures).
  3. Reviewing and monitoring the GPAI Code of Practice: This relates to how the AI Office will encourage and facilitate the review and adaptation of the Code after its initial implementation.

Interested parties can submit their responses to the consultation via an online form by September 18, 2024. They also have the option to share additional information with the AI Office by filling out the template document featured at the end of the questionnaire.

Continue Reading European Commission Launches Consultation and Call for Expression of Interest on GPAI Code of Practice

With Congress in summer recess and state legislative sessions waning, the Biden Administration continues to implement its October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (“EO”).  On July 26, the White House announced a series of federal agency actions under the EO

Continue Reading Federal Agencies Continue Implementation of AI Executive Order

On Wednesday, August 7, the Federal Communications Commission (FCC) approved a Notice of Proposed Rulemaking (NPRM) that would amend its rules under the Telephone Consumer Protection Act (TCPA) to incorporate new consent and disclosure requirements for the transmission of AI-generated calls and texts. The NPRM builds off the FCC’s recent

Continue Reading FCC Proposes New Consent and Disclosure Rules for AI-Generated Calls and Texts

This quarterly update highlights key legislative, regulatory, and litigation developments in the second quarter of 2024 related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), and data privacy and cybersecurity. 

I. Artificial Intelligence

Federal Legislative Developments

  • Impact Assessments: The American Privacy Rights Act of 2024 (H.R. 8818, hereinafter “APRA”) was formally introduced in the House by Representative Cathy McMorris Rodgers (R-WA) on June 25, 2024.  Notably, while previous drafts of the APRA, including the May 21 revised draft, would have required algorithm impact assessments, the introduced version no longer has the “Civil Rights and Algorithms” section that contained these requirements.
  • Disclosures: In April, Representative Adam Schiff (D-CA) introduced the Generative AI Copyright Disclosure Act of 2024 (H.R. 7913).  The Act would require persons that create a training dataset that is used to build a generative AI system to provide notice to the Register of Copyrights containing a “sufficiently detailed summary” of any copyrighted works used in the training dataset and the URL for such training dataset, if the dataset is publicly available.  The Act would require the Register to issue regulations to implement the notice requirements and to maintain a publicly available online database that contains each notice filed.
  • Public Awareness and Toolkits: Certain legislative proposals focused on increasing public awareness of AI and its benefits and risks.  For example, Senator Todd Young (R-IN) introduced the Artificial Intelligence Public Awareness and Education Campaign Act (S. 4596), which would require the Secretary of Commerce, in coordination with other agencies, to carry out a public awareness campaign that provides information regarding the benefits and risks of AI in the daily lives of individuals.  Senator Edward Markey (D-MA) introduced the Social Media and AI Resiliency Toolkits in Schools Act (S. 4614), which would require the Department of Education and the Department of Health and Human Services to develop toolkits to inform students, educators, parents, and others on how AI and social media may impact student mental health.
  • Senate AI Working Group Releases AI Roadmap: On May 15, the Bipartisan Senate AI Working Group published a roadmap for AI policy in the United States (the “AI Roadmap”).  The AI Roadmap encourages committees to conduct further research on specific issues relating to AI, such as “AI and the Workforce” and “High Impact Uses for AI.”  It states that existing laws (concerning, e.g., consumer protection, civil rights) “need to consistently and effectively apply to AI systems and their developers, deployers, and users” and raises concerns about AI “black boxes.”  The AI Roadmap also addresses the need for best practices and the importance of having a human in the loop for certain high impact automated tasks.

Continue Reading U.S. Tech Legislative, Regulatory & Litigation Update – Second Quarter 2024

On Thursday, July 25, the Federal Communications Commission (FCC) released a Notice of Proposed Rulemaking (NPRM) proposing new requirements for radio and television broadcasters and certain other licensees that air political ads containing content created using artificial intelligence (AI).  The NPRM was approved on a 3-2 party-line vote and comes in the wake of an announcement made by FCC Chairwoman Jessica Rosenworcel earlier this summer about the need for such requirements, which we discussed here.

At the core of the NPRM are two proposed requirements.  First, parties subject to the rules would have to announce on-air that a political ad (whether a candidate-sponsored ad or an “issue ad” purchased by a political action committee) was created using AI.  Second, those parties would have to include a note in their online political files for political ads containing AI-generated content disclosing the use of such content.  Additional key features of the NPRM are described below.

Continue Reading FCC Proposes Labeling and Disclosure Rules for AI-Generated Content in Political Ads

On July 9, 2024, the FTC and California Attorney General settled a case against NGL Labs (“NGL”) and two of its co-founders. NGL Labs’ app, “NGL: ask me anything,” allows users to receive anonymous messages from their friends and social media followers. The complaint alleged violations of the FTC Act

Continue Reading FTC Reaches Settlement with NGL Labs Over Children’s Privacy & AI

By Madelaine Harrington & Marty Hansen on July 17, 2024

On 12 July 2024, EU lawmakers published the EU Artificial Intelligence Act (“AI Act”), a first-of-its-kind regulation aiming to harmonise rules on AI models and systems across the EU. The AI Act prohibits certain AI practices, and sets out regulations on

Continue Reading EU Artificial Intelligence Act Published

With most state legislative sessions across the country adjourned or winding down without enacting significant artificial intelligence legislation, Colorado and California continue their steady drive to adopt comprehensive legislation regulating the development and deployment of AI systems. 

Colorado

Although Colorado’s AI law (SB 205), which Governor Jared Polis (D) signed into law in May, does not take effect until February 1, 2026, lawmakers have already begun a process for refining the nation’s first comprehensive AI law.  As we described here, the new law will require developers and deployers of “high-risk” AI systems to comply with certain requirements in order to mitigate risks of algorithmic discrimination. 

On June 13, Governor Polis, Attorney General Phil Weiser (D), and Senate Majority Leader Robert Rodriguez (D) issued a public letter announcing a “process to revise” the new law before it even takes effect, and “minimize unintended consequences associated with its implementation.”  The revision process will address concerns that the high cost of compliance will adversely affect “home grown businesses” in Colorado, including through “barriers to growth and product development, job losses, and a diminished capacity to raise capital.”

The letter proposes “a handful of specific areas” for revision, including:

  • Refining SB 205’s definition of AI systems to focus on “the most high-risk systems” in order to align with federal measures and frameworks in states with substantial technology sectors.  This goal aligns with the officials’ call for “harmony across any regulatory framework adopted by states” to “limit the burden associated with a multi-state compliance scheme that deters investment and hamstrings small technology firms.”  The officials add that they “remain open to delays in the implementation” of the new law “to ensure such harmonization.”  
  • Narrowing SB 205’s requirements to focus on developers of high-risk systems and avoid regulating “small companies that may deploy AI within third-party software that they use in the ordinary course of business.”  This goal addresses concerns of Colorado businesses that the new law could “inadvertently impose prohibitively high costs” on AI deployers.
  • Shifting from a “proactive disclosure regime” to a “traditional enforcement regime managed by the Attorney General investigating matters after the fact.”  This goal also focuses on protecting Colorado’s small businesses from prohibitively high costs that could deter investment and hamper Colorado’s technology sector.

Continue Reading Colorado and California Continue to Refine AI Legislation as Legislative Sessions Wane