With three months left until the end of this year’s legislative session, the California Legislature has been considering a flurry of bills regarding artificial intelligence (AI). Notable bills, described further below, would impose requirements on developers and deployers of generative AI systems. The bills contain varying definitions of AI and generative AI systems. Each of these bills has passed one legislative chamber but remains under consideration in the other.

Legislation Regulating AI Developers

Two bills would require generative AI systems to make AI-generated content easily identifiable.

  • SB 942 would require generative AI systems that average 1 million monthly visitors or users to provide an “AI detection tool” that would verify whether content was generated by the system. It would also require AI-generated content to include a visible and difficult-to-remove disclosure that the content was generated by AI. A noncompliant system would incur a $5,000 fine per day, although only the Attorney General could file an enforcement action.
  • AB 3211 would require, starting February 1, 2025, that every generative AI system, as defined under the law, place watermarks in AI-generated content. Providers of generative AI systems would also need to develop associated decoders that verify whether content was generated by the system. A system available before February 1, 2025 could remain available only if its provider created a decoder with 99% accuracy or published research showing that the system is incapable of producing inauthentic content. A system used in a conversational setting (e.g., a chatbot) would need to clearly disclose that it generates synthetic content. Additionally, vulnerabilities in the system would need to be reported to the Department of Technology, which would have administrative enforcement authority to impose penalties of up to the greater of $1 million or 5% of the violator’s annual global revenue.

Two additional bills would limit or require disclosure of information about data sources used to train AI models.

  • AB 2013 would require, beginning January 1, 2026, that the developer of any AI model post on its website information regarding the data used to train the model. This would include: the source or owner of the data; the number of samples in the data; whether the data is protected by copyright, trademark, or patent; and whether the data contains personal information or aggregate consumer information, as defined in the California Consumer Privacy Act (CCPA). AI models developed solely to ensure security and integrity would be exempt from this requirement.
  • AB 2877 would prohibit using the personal information, as defined in the CCPA, of individuals under 16 years old to train an AI model without affirmative consent. For individuals under 13 years old, a parent would need to provide that consent. Even with consent, the developer would need to deidentify and aggregate the data before using it to train an AI model.

Legislators are also considering preemptively regulating AI that is more advanced than any system currently in existence. SB 1047 would create a new Frontier Model Division to regulate AI models trained on a system that can perform 10²⁶ integer operations per second (IOPS) or floating-point operations per second (FLOPS). The legislature emphasized this would not regulate any technology currently in existence. The bill would also require operators of a cluster of computers that can perform 10²⁰ IOPS or FLOPS to establish certain policies around customer use of the cluster.

Legislation Regulating AI Deployers

AB 2930 would impose compliance requirements on entities that use AI as a substantial factor in making defined “consequential decisions,” such as employee hiring or pay, educational assessment, access to financial services, and health care decisions. Such entities would need to perform an impact assessment before using the AI tool, which would include an analysis of the information collected and processed by the tool, potential adverse impacts on protected classes, safeguards established to address reasonably foreseeable risks of discrimination against protected classes, and a few additional categories of information. (Developers of such AI tools also would need to perform a similar impact assessment.) These requirements would take effect on January 1, 2025, with a one-year grace period for pre-existing tools. Before using the AI tool to make a consequential decision, the entity would need to inform the individual of the tool’s use. For decisions based solely on the tool’s output, an entity would need to accommodate, if technically feasible, an individual’s request not to be subject to the tool’s use. Violations of these requirements would be subject to civil actions by the Civil Rights Department, the Attorney General, and local prosecutors.

AB 3030 would require doctor’s offices that use AI to generate patient communications to disclose that use to patients. Such offices would also be required to provide patients with instructions on how to communicate with a human healthcare provider.

Legislation Targeting Other Entities

AB 1791 would require social media platforms to retain some and redact other “provenance data” embedded in user-uploaded content. The bill defines provenance data as data used to verify the content’s authenticity, origin, or history of modification. Platforms would be required to retain provenance data that identifies the device or service used to generate the content as well as data that proves the content’s authenticity. But platforms would be required to redact provenance data that contains personal information as defined in the CCPA, in addition to other unique data that is reasonably capable of being associated with a particular individual. A violation of these requirements would constitute an unfair business practice.

Finally, AB 2355 would require entities that create, originally publish, or originally distribute a political advertisement containing an image, audio, or video generated or substantially altered by AI to disclose the use of AI. The bill explicitly states that it does not alter any protections granted by Section 230 of the Communications Decency Act. It also grants a few limited exemptions from the law’s substantive provisions, most notably one for news websites that publish the advertisement as part of news coverage.

Lindsey Tonsager

Lindsey Tonsager co-chairs the firm’s global Data Privacy and Cybersecurity practice. She advises clients in their strategic and proactive engagement with the Federal Trade Commission, the U.S. Congress, the California Privacy Protection Agency, and state attorneys general on proposed changes to data protection laws, and regularly represents clients in responding to investigations and enforcement actions involving their privacy and information security practices.

Lindsey’s practice focuses on helping clients launch new products and services that implicate the laws governing the use of artificial intelligence, data processing for connected devices, biometrics, online advertising, endorsements and testimonials in advertising and social media, the collection of personal information from children and students online, e-mail marketing, disclosures of video viewing information, and new technologies.

Lindsey also assesses privacy and data security risks in complex corporate transactions where personal data is a critical asset or data processing risks are otherwise material. In light of a dynamic regulatory environment where new state, federal, and international data protection laws are always on the horizon and enforcement priorities are shifting, she focuses on designing risk-based, global privacy programs for clients that can keep pace with evolving legal requirements and efficiently leverage the clients’ existing privacy policies and practices. She conducts data protection assessments to benchmark against legal requirements and industry trends and proposes practical risk mitigation measures.

Priya Leeds

Priya Sundaresan Leeds is an associate in the firm’s San Francisco office. She is a member of the Privacy and Cybersecurity Practice Group. She also maintains an active pro bono practice with a focus on gun control and criminal justice.