On March 18, the Joint California Policy Working Group on AI Frontier Models (the “Working Group”) released its draft report on the regulation of foundation models, with the aim of providing an “evidence-based foundation for AI policy decisions” in California that “ensure[s] these powerful technologies benefit society globally while reasonably managing emerging risks.” The Working Group was established by California Governor Gavin Newsom (D) in September 2024, following his veto of the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), authored by California State Senator Scott Wiener (D-San Francisco). The Working Group builds on California’s partnership with Stanford University and the University of California, Berkeley, established by Governor Newsom’s 2023 Executive Order on generative AI.
Noting that “foundation model capabilities have rapidly improved” since the veto of SB 1047 and that California’s “unique opportunity” to shape AI governance “may not remain open indefinitely,” the report assesses transparency, third-party risk assessment, and adverse event reporting requirements as key components for foundation model regulation.
Transparency Requirements. The report finds that foundation model transparency requirements are a “necessary foundation” for AI regulation and recommends that policymakers “prioritize public-facing transparency to best advance accountability.” Specifically, the report recommends transparency requirements that focus on five categories of information about foundation models: (1) training data acquisition, (2) developer safety practices, (3) developer security practices, (4) pre-deployment testing by developers and third parties, and (5) downstream impacts, potentially including disclosures from entities that host foundation models for download or use.
Third-Party Risk Assessments. Noting that transparency “is often insufficient and requires supplementary verification mechanisms” for accountability, the report adds that third-party risk assessments are “essential” for “creating incentives for developers to increase the safety of their models.” To support effective third-party AI evaluations, the report calls on policymakers to consider establishing safe harbors that indemnify public interest safety research and “routing mechanisms” to quickly communicate identified vulnerabilities to developers and affected parties.
Whistleblower Protections. Additionally, the report assesses the need for whistleblower protections for employees and contractors of foundation model developers. The report advises policymakers to “consider protections that cover a broader range of [AI developer] activities,” such as failures to follow a company’s AI safety policy, even if the reported conduct does not violate existing laws.