On June 17, the Joint California Policy Working Group on AI Frontier Models (“Working Group”) issued its final report on frontier AI policy, following public feedback on the draft version of the report released in March. The report describes “frontier models” as the “most capable” subset of foundation models, a class of general-purpose technologies that are resource-intensive to produce and require significant amounts of data and compute to yield capabilities that can power a variety of downstream AI applications.
After analyzing numerous case studies and describing the potential benefits of effective frontier model regulation, the Working Group proposes key principles to inform the development of “evidence-based” legislation that appropriately balances safety and innovation.
- Transparency Requirements. The report states that frontier AI policy should prioritize “public-facing transparency requirements to best advance accountability” and promote public trust in AI technology. The report identifies several “key areas” for frontier model developer transparency requirements, including risks and risk mitigation, cybersecurity practices, pre-deployment assessments of capabilities and risks, downstream impacts, and disclosures regarding how training data is obtained.
- Third-Party Risk Assessments. The report states that third-party risk assessments are “essential” for building a “more complete evidence base on the risks of foundation models” and, when coupled with transparency requirements, can create a “race to the top” in AI safety practices. To implement third-party risk assessments, the report recommends that policymakers provide “safe harbors” for third-party AI evaluators “analogous to those afforded to third-party cybersecurity testers.”
- Whistleblower Protections. The report finds that legal protections against retaliation for employees who report wrongdoing can “play a critical role in surfacing misconduct, identifying systemic risks, and fostering accountability in AI development and deployment,” while noting tradeoffs involved in extending whistleblower protections to contractors or other third parties. The report suggests that policymakers consider whistleblower protections that “cover a broader range of activities” beyond only legal violations, which “may draw upon notions of ‘good faith’ reporting” in cybersecurity or other domains.
- Adverse Event Reporting. Drawing on examples of post-deployment monitoring in other contexts, such as government reporting requirements related to medical device and equipment malfunctions, the report describes adverse event reporting as another “critical first step” for “targeted AI regulation.” The report recommends mandatory adverse event reporting systems that share reports with “relevant agencies with domain-specific regulatory authority and expertise” and focus on a “tightly defined” and periodically updated set of harms. The report further recommends that policymakers combine mandatory reporting with “voluntary reporting for downstream users.” The report outlines certain benefits of adverse event reporting, including identifying emerging and unanticipated harms, encouraging proactive measures to mitigate risks, improving coordination between the government and private sector, and reducing the costs of enforcement. The report also notes likely challenges for adverse event reporting regimes, such as the difficulty of clearly defining “adverse events” and of ensuring sufficient government resources for monitoring reports.
- Scoping. According to the report, “[w]ell-designed regulation is proportionate.” The report “cautions against” frontier model regulations that use “thresholds based on developer-level properties” (e.g., employee headcount), which “may inadvertently ignore key players,” while noting that “training compute thresholds” may be the “most attractive option” for policymakers.
The findings of the Working Group, which was convened by Governor Gavin Newsom (D) in September 2024 following his veto of California’s proposed Safe & Secure Innovation for Frontier AI Models Act (SB 1047), could inform lawmakers as they move forward with foundation model legislation in the 2025 legislative session. On May 28, for example, the California Senate passed SB 53, a foundation model whistleblower bill introduced by State Senator (and SB 1047 co-sponsor) Scott Wiener. Additionally, on June 12, the New York legislature passed the Responsible AI Safety & Education (“RAISE”) Act (S 6953), a frontier model public safety bill that we previously covered here.
* * *
We will continue to provide updates on meaningful developments related to artificial intelligence and technology across our Inside Global Tech, Global Policy Watch, and Inside Privacy blogs.