As the California Legislature’s 2025 session draws to a close, lawmakers have advanced over a dozen AI bills to the final stages of the legislative process, setting the stage for a potential showdown with Governor Gavin Newsom (D).  The AI bills, some of which have already passed both chambers, reflect recent trends in state AI regulation nationwide, including AI consumer protection frameworks, guardrails for the use of AI in employment and healthcare, frontier model safety requirements, and chatbot safeguards. 

AI Consumer Protection.  California lawmakers are advancing several bills that would impose disclosure, testing, documentation, and other governance requirements for AI systems used to make or assist in decisions that impact consumers.  Like 2024’s Colorado AI Act, California’s Automated Decisions Safety Act (AB 1018) would adopt a cross-sector approach, imposing duties and requirements on developers and deployers of “automated decision systems” (“ADS”) used to make or facilitate employment, education, housing, healthcare, or other “consequential decisions” affecting natural persons.  The bill would require ADS developers and deployers to conduct impact assessments and third-party audits and comply with various disclosure and documentation requirements, and would establish consumer notice, correction, and appeal rights. 

Employment and Healthcare.  SB 7 would establish worker notice, access, and correction rights, prohibited uses, and human oversight requirements for employers that use ADS for employment-related decisions.  Other bills would impose similar restrictions on AI used in healthcare contexts.  AB 489, which passed both chambers on September 8, would prohibit representations that indicate that an AI system possesses a healthcare license or can provide professional healthcare advice.

Frontier Model Safety.  Following the 2024 passage—and Governor Newsom’s subsequent veto—of the Safe & Secure Innovation for Frontier AI Models Act (SB 1047), State Senator Scott Wiener (D-San Francisco) has led a renewed push for frontier model safety with his Transparency in Frontier AI Act (SB 53).  SB 53 would require large developers of frontier models to implement and publish a “frontier AI framework” to mitigate potential public safety harms arising from frontier model development, and would impose transparency report and incident reporting requirements.  Unlike SB 1047, SB 53 would not require developers to implement a “full shutdown” capability for frontier models, conduct third-party audits, or meet a duty of reasonable care to prevent public safety harms.  Moreover, while SB 1047 would have established civil penalties of up to 10 percent of the cost of computing power used to train any developer’s frontier model, SB 53 would establish a uniform penalty of up to $1 million per violation of any of its frontier AI transparency provisions and would only apply to developers with annual revenues above $500 million.  Although its likelihood of passage remains uncertain, SB 53 builds on several recent state efforts to establish frontier model safeguards, including the passage of the Responsible AI Safety & Education (“RAISE”) Act in New York in May and the release of a final report on frontier AI policy by California’s Frontier AI Working Group in June.

Chatbots.  Various other California bills would establish safeguards for individuals, and particularly children, who interact with AI chatbots or generative AI systems.  The Leading Ethical AI Development (“LEAD”) for Kids Act (AB 1064), which passed the Senate on September 10 and could receive a vote in the Assembly as soon as this week, would prohibit individuals or businesses from providing “companion chatbots”—generative AI systems that simulate sustained humanlike relationships through personalization, unprompted questions, and ongoing dialogue with users—to children if the companion chatbot is “foreseeably capable” of engaging in certain prohibited activities, including encouraging a child to engage in self-harm, violence, or illegal activity, offering unlicensed mental health therapy to a child, or prioritizing user validation and engagement over child safety.  Another AI chatbot safety bill, SB 243, passed the Assembly on September 10 and awaits final passage in the Senate.  SB 243 would require companion chatbot operators to issue recurring disclosures to minor users, implement protocols to prevent the generation of content related to suicide or self-harm, and disclose companion chatbot protocols and other information to the state.

The bills above reflect only some of the AI legislation pending before California lawmakers ahead of their September 12 deadline for passage.  Other AI bills have already passed both chambers and now head to the Governor, including AB 316, which would prohibit AI developers or deployers from asserting as a legal defense that AI “autonomously” caused harm, and SB 524, which would establish restrictions on the use of AI by law enforcement agencies.  Governor Newsom will have until October 12 to sign or veto these and any other AI bills that reach his desk.

Matthew Shapanka


Matthew Shapanka practices at the intersection of law, policy, and politics, developing strategies to guide businesses facing complex legislative, regulatory, and investigative matters. Matt draws on more than 15 years of experience across Capitol Hill, private practice, state government, and political campaigns to advise clients on leading-edge policy issues involving artificial intelligence, semiconductors, connected and autonomous vehicles, and other critical and emerging technologies.

Matt works with clients to develop and execute complex public policy initiatives that involve legal, political, and reputational risks. He regularly assists clients to:

Develop public policy strategies
Draft federal and state legislation and regulations
Analyze legislation, regulations, and other government initiatives
Craft testimony, regulatory comments, fact sheets, letters and other advocacy materials
Prepare company executives and other witnesses to testify before Congress, state legislatures, and regulatory bodies
Represent clients before Congress, the White House, federal agencies, state legislatures, and state regulatory agencies
Build and manage policy advocacy coalitions

He advises clients across multiple policy areas, including matters involving regulation of critical and emerging technologies like artificial intelligence, connected and autonomous vehicles, and semiconductors; national security; intellectual property; antitrust; financial services technologies (“fintech”); food and beverage regulation; COVID-19 pandemic response and recovery; and election administration and campaign finance.

Matt rejoined Covington after serving as Chief Counsel for the U.S. Senate Committee on Rules and Administration, where he advised Chairwoman Amy Klobuchar (D-MN) on all legal, policy, and oversight matters before the Committee. Most significantly, Matt staffed the Committee's passage of the Electoral Count Reform Act, a landmark bipartisan law that updates the procedures for certifying and counting votes in presidential elections, and its bipartisan joint investigation (with the Homeland Security Committee) into the security planning and response to the January 6, 2021 attack on the Capitol.

Both in Congress and at Covington, Matt has prepared dozens of corporate and nonprofit executives, academics, government officials, and presidential nominees for testimony at congressional committee hearings and depositions. He is a skilled legislative drafter who has composed dozens of bills and amendments introduced in Congress and state legislatures, including several that have been enacted into law across multiple policy areas. Matt also leads the firm’s state policy practice, advising clients on complex multistate legislative and regulatory matters and managing state-level advocacy efforts.

In addition to his policy work, as a member of Covington’s nationally recognized (Chambers Band 1) Election and Political Law Practice Group, Matt advises and represents clients on the full range of political law compliance and enforcement matters, including:

Federal election, campaign finance, lobbying, and government ethics laws
The Securities and Exchange Commission’s “Pay-to-Play” rule
Election and political laws of states and municipalities across the country

Before law school, Matt served in the administration of former Governor Deval Patrick (D-MA), where he worked on policy, communications, and compliance matters for federal economic recovery funding awarded to the state. He has also staffed federal, state, and local political candidates in Massachusetts and New Hampshire.

August Gweon


August Gweon counsels national and multinational companies on new regulatory frameworks governing artificial intelligence, robotics, and other emerging technologies, digital services, and digital infrastructure. August leverages his AI and technology policy experiences to help clients understand AI industry developments, emerging risks, and policy and enforcement trends. He regularly advises clients on AI governance, risk management, and compliance under data privacy, consumer protection, safety, procurement, and platform laws.

August’s practice includes providing comprehensive advice on U.S. state and federal AI policies and legislation, including the Colorado AI Act and state laws regulating automated decision-making technologies, AI-generated content, generative AI systems and chatbots, and foundation models. He also assists clients in assessing risks and compliance under federal and state privacy laws like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in AI public policy advocacy and rulemaking.