On September 29, California Governor Gavin Newsom (D) vetoed the Safe & Secure Innovation for Frontier AI Models Act (SB 1047), putting an end, for now, to a months-long effort to establish public safety standards for developers of large AI systems.  SB 1047’s sweeping AI safety and security regime, which included annual third-party safety audits, shutdown capabilities, detailed safety and security protocols, and incident reporting requirements, would likely have established a de facto national safety standard for large AI models if enacted.  The veto followed rare public calls from Members of California’s congressional delegation—including Speaker Emerita Nancy Pelosi (D-CA) and Representatives Ro Khanna (D-CA), Anna Eshoo (D-CA), Zoe Lofgren (D-CA), and Jay Obernolte (R-CA)—for the governor to reject the bill.

In his veto message, Governor Newsom noted that “[AI] safety protocols must be adopted” with “[p]roactive guardrails” and “severe consequences for bad actors,” but he criticized SB 1047 for regulating based on the “cost and number of computations needed to develop an AI model.” SB 1047 would have defined “covered models” as AI models trained using more than 10²⁶ floating-point operations of computing power at a cost of more than $100 million.  Newsom argued that, by relying on cost and computing thresholds rather than “the system’s actual risks,” SB 1047 “applies stringent standards to even the most basic functions–so long as a large system deploys it.”  Newsom added that SB 1047 could “give the public a false sense of security about controlling this fast-moving technology” while “[s]maller, specialized models” could be “equally or even more dangerous than the models targeted by SB 1047.”

The veto follows Newsom’s prior statement, on September 17, expressing concerns with the “outsized impact that [SB 1047] could have,” including “the chilling effect, particularly in the open source community” and potential effects on the competitiveness of California’s AI industry.

Echoing the risk-based approaches taken by Colorado’s SB 205—the landmark AI anti-discrimination law passed in May—and California’s AB 2930, a similar automated decision-making bill that failed to pass the state Senate in August, Newsom called for new legislation that would “take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data.”

In contrast with Colorado Gov. Jared Polis’s call for a unified federal approach to AI regulation, however, Newsom noted that a “California-only approach” to regulating AI “may well be warranted . . . especially absent federal action by Congress.”  Newsom also pointed to the numerous AI bills “regulating specific, known risks” he signed into law, including laws regulating or prohibiting digital replicas, election deepfakes, and AI-generated CSAM (AB 1831).

While the legislature can override the governor’s veto by a two-thirds vote, it has not taken that step since 1979.  Instead, we expect the legislature to revisit AI safety legislation next year.

*                      *                      *

Follow our Global Policy Watch, Inside Global Tech, and Inside Privacy blogs for ongoing updates on key AI and other technology legislative and regulatory developments.

Matthew Shapanka

Matthew Shapanka is a strategic policy and regulatory attorney who helps technology companies and other businesses navigate complex, high-stakes legislative, regulatory, and enforcement matters at the intersection of law and politics. Drawing on 15+ years of experience across private practice, the U.S. Senate, state government, and political campaigns, Matt develops comprehensive policy strategies that identify regulatory risks and position clients to shape policy outcomes.

Public Policy and Regulatory Strategy

Matt serves as a strategic advisor to Fortune 200 companies on emerging technology policy, including artificial intelligence regulation, connected and autonomous vehicles, semiconductors, IoT, and national security matters. He translates complex legal and technical issues into actionable legislative and regulatory strategy, building the policy frameworks and advocacy infrastructure that enable clients to influence policy. He develops policy collateral for federal, state, and international advocacy, coordinates multi-stakeholder coalitions, and represents clients before Congress, federal agencies, and state legislative and regulatory bodies.

His technology policy experience includes securing unprecedented Presidential intervention in the $118 billion Qualcomm-Broadcom transaction (for which Covington was recognized as The American Lawyer 2019 “Dealmakers of the Year”), advising Fortune 200 companies on Bureau of Industry and Security connected vehicle rules, and counseling major internet platforms on autonomous vehicle policy across dozens of states.

Matt leads Covington’s state public policy practice, managing complex multistate legislative and regulatory advocacy campaigns. His state-level work includes securing a last-minute amendment to California’s 2023 money transmitter legislation on behalf of a fintech client and representing major technology companies on state AI, autonomous vehicle, and political advertising compliance matters across dozens of jurisdictions.

Matt rejoined Covington after serving as Chief Counsel for the U.S. Senate Committee on Rules and Administration under Chairwoman Amy Klobuchar (D-MN), where he negotiated the landmark bipartisan Electoral Count Reform Act – legislation that updated presidential election certification procedures for the first time in nearly 140 years. He also oversaw the Committee’s bipartisan January 6th investigation, developing protocols that resulted in unanimous passage of new Capitol security legislation.

Both in Congress and at Covington, Matt has prepared dozens of corporate executives, nonprofit leaders, academics, and presidential nominees for testimony at congressional committee hearings and depositions. He is a skilled legislative drafter and strategist who has composed dozens of bills and amendments introduced in Congress and state legislatures, including many that have been enacted into law.

Election and Political Law Compliance and Enforcement

As a member of Covington’s Chambers-ranked (Band 1) Election and Political Law practice, Matt advises businesses, nonprofits, political committees, candidates, and donors on the full range of federal and state political law compliance matters, including:

Election and campaign finance laws
Lobbying disclosure
Government ethics rules
The SEC Pay-to-Play Rule

He also conducts political law due diligence for M&A transactions, counsels major political funders and donors in compliance and enforcement matters, and represents candidates, ballot measure committees, and donors in election disputes and recounts.

Before law school, Matt served in the administration of former Governor Deval Patrick (D-MA), where he worked on policy, communications, and compliance matters for federal economic recovery funding awarded to the state. He has also staffed federal, state, and local political candidates in Massachusetts and New Hampshire.

August Gweon

August Gweon counsels national and multinational companies on new regulatory frameworks governing artificial intelligence, robotics, and other emerging technologies, digital services, and digital infrastructure. August leverages his AI and technology policy experiences to help clients understand AI industry developments, emerging risks, and policy and enforcement trends. He regularly advises clients on AI governance, risk management, and compliance under data privacy, consumer protection, safety, procurement, and platform laws.

August’s practice includes providing comprehensive advice on U.S. state and federal AI policies and legislation, including the Colorado AI Act and state laws regulating automated decision-making technologies, AI-generated content, generative AI systems and chatbots, and foundation models. He also assists clients in assessing risks and compliance under federal and state privacy laws like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in AI public policy advocacy and rulemaking.