On September 29, California Governor Gavin Newsom (D) vetoed the Safe & Secure Innovation for Frontier AI Models Act (SB 1047), putting an end, for now, to a months-long effort to establish public safety standards for developers of large AI systems.  SB 1047’s sweeping AI safety and security regime, which included annual third-party safety audits, shutdown capabilities, detailed safety and security protocols, and incident reporting requirements, would likely have established a de facto national safety standard for large AI models if enacted.  The veto followed rare public calls from Members of California’s congressional delegation—including Speaker Emerita Nancy Pelosi (D-CA) and Representatives Ro Khanna (D-CA), Anna Eshoo (D-CA), Zoe Lofgren (D-CA), and Jay Obernolte (R-CA)—for the governor to reject the bill.

In his veto message, Governor Newsom noted that “[AI] safety protocols must be adopted” with “[p]roactive guardrails” and “severe consequences for bad actors,” but he criticized SB 1047 for regulating based on the “cost and number of computations needed to develop an AI model.” SB 1047 would have defined “covered models” as AI models trained using more than 10^26 FLOPS of computational power valued at more than $100 million.  Newsom argued that, by relying on cost and computing thresholds rather than “the system’s actual risks,” SB 1047 “applies stringent standards to even the most basic functions–so long as a large system deploys it.”  Newsom added that SB 1047 could “give the public a false sense of security about controlling this fast-moving technology” while “[s]maller, specialized models” could be “equally or even more dangerous than the models targeted by SB 1047.”

The veto follows Newsom’s prior statement, on September 17, expressing concerns with the “outsized impact that [SB 1047] could have,” including “the chilling effect, particularly in the open source community” and potential effects on the competitiveness of California’s AI industry.

Echoing the risk-based approaches taken by Colorado’s SB 205—the landmark AI anti-discrimination law passed in May—and California’s AB 2930, a similar automated decision-making bill that failed to pass the state Senate in August, Newsom called for new legislation that would “take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data.”

In contrast with Colorado Gov. Jared Polis’s call for a unified federal approach to AI regulation, however, Newsom noted that a “California-only approach” to regulating AI “may well be warranted . . . especially absent federal action by Congress.”  Newsom also pointed to the numerous AI bills “regulating specific, known risks” he signed into law, including laws regulating or prohibiting digital replicas, election deepfakes, and AI-generated CSAM (AB 1831).

While the legislature can override the governor’s veto by a two-thirds vote, it has not taken that step since 1979.  Instead, we expect the legislature to revisit AI safety legislation next year.

*                      *                      *

Follow our Global Policy Watch, Inside Global Tech, and Inside Privacy blogs for ongoing updates on key AI and other technology legislative and regulatory developments.

Matthew Shapanka

Matthew Shapanka practices at the intersection of law, policy, and politics, developing strategies to guide businesses facing complex legislative, regulatory, and investigative matters. Matt draws on more than 15 years of experience across Capitol Hill, private practice, state government, and political campaigns to advise clients on leading-edge policy issues involving artificial intelligence, semiconductors, connected and autonomous vehicles, and other critical and emerging technologies.

Matt works with clients to develop and execute complex public policy initiatives that involve legal, political, and reputational risks. He regularly assists clients to:

Develop public policy strategies
Draft federal and state legislation and regulations
Analyze legislation, regulations, and other government initiatives
Craft testimony, regulatory comments, fact sheets, letters and other advocacy materials
Prepare company executives and other witnesses to testify before Congress, state legislatures, and regulatory bodies
Represent clients before Congress, the White House, federal agencies, state legislatures, and state regulatory agencies
Build and manage policy advocacy coalitions

He advises clients across multiple policy areas, including matters involving regulation of critical and emerging technologies like artificial intelligence, connected and autonomous vehicles, and semiconductors; national security; intellectual property; antitrust; financial services technologies (“fintech”); food and beverage regulation; COVID-19 pandemic response and recovery; and election administration and campaign finance.

Matt rejoined Covington after serving as Chief Counsel for the U.S. Senate Committee on Rules and Administration, where he advised Chairwoman Amy Klobuchar (D-MN) on all legal, policy, and oversight matters before the Committee. Most significantly, Matt staffed the Committee in passing the Electoral Count Reform Act, a landmark bipartisan law that updates the procedures for certifying and counting votes in presidential elections, and in the Committee’s bipartisan joint investigation (with the Homeland Security Committee) into the security planning and response to the January 6, 2021, attack on the Capitol.

Both in Congress and at Covington, Matt has prepared dozens of corporate and nonprofit executives, academics, government officials, and presidential nominees for testimony at congressional committee hearings and depositions. He is a skilled legislative drafter who has composed dozens of bills and amendments introduced in Congress and state legislatures, including several that have been enacted into law across multiple policy areas. Matt also leads the firm’s state policy practice, advising clients on complex multistate legislative and regulatory matters and managing state-level advocacy efforts.

In addition to his policy work, as a member of Covington’s nationally recognized (Chambers Band 1) Election and Political Law Practice Group, Matt advises and represents clients on the full range of political law compliance and enforcement matters, including:

Federal election, campaign finance, lobbying, and government ethics laws
The Securities and Exchange Commission’s “Pay-to-Play” rule
Election and political laws of states and municipalities across the country

Before law school, Matt served in the administration of former Governor Deval Patrick (D-MA), where he worked on policy, communications, and compliance matters for federal economic recovery funding awarded to the state. He has also staffed federal, state, and local political candidates in Massachusetts and New Hampshire.

August Gweon

August Gweon counsels national and multinational companies on new regulatory frameworks governing artificial intelligence, robotics, and other emerging technologies, digital services, and digital infrastructure. August leverages his AI and technology policy experiences to help clients understand AI industry developments, emerging risks, and policy and enforcement trends. He regularly advises clients on AI governance, risk management, and compliance under data privacy, consumer protection, safety, procurement, and platform laws.

August’s practice includes providing comprehensive advice on U.S. state and federal AI policies and legislation, including the Colorado AI Act and state laws regulating automated decision-making technologies, AI-generated content, generative AI systems and chatbots, and foundation models. He also assists clients in assessing risks and compliance under federal and state privacy laws like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in AI public policy advocacy and rulemaking.