On June 17, the Joint California Policy Working Group on AI Frontier Models (“Working Group”) issued its final report on frontier AI policy, following public feedback on the draft version of the report released in March.  The report describes “frontier models” as the “most capable” subset of foundation models, a class of general-purpose technologies that are resource-intensive to produce and require significant amounts of data and compute to yield capabilities that can power a variety of downstream AI applications.

After analyzing numerous case studies and describing the potential benefits of effective frontier model regulation, the Working Group outlines key principles to inform the development of “evidence-based” legislation that appropriately balances safety and innovation.

  • Transparency Requirements.  The report states that frontier AI policy should prioritize “public-facing transparency requirements to best advance accountability” and promote public trust in AI technology.  The report identifies several “key areas” for frontier model developer transparency requirements, including risks and risk mitigation, cybersecurity practices, pre-deployment assessments of capabilities and risks, downstream impacts, and disclosures regarding how training data is obtained. 
  • Third-Party Risk Assessments.  The report states that third-party risk assessments are “essential” for building a “more complete evidence base on the risks of foundation models” and, when coupled with transparency requirements, can create a “race to the top” in AI safety practices.  To implement third-party risk assessments, the report recommends that policymakers provide “safe harbors” for third-party AI evaluators “analogous to those afforded to third-party cybersecurity testers.”
  • Whistleblower Protections.  The report finds that legal protections against retaliation for employees who report wrongdoing can “play a critical role in surfacing misconduct, identifying systemic risks, and fostering accountability in AI development and deployment,” while noting tradeoffs involved in extending whistleblower protections to contractors or other third parties.  The report suggests that policymakers consider whistleblower protections that “cover a broader range of activities” beyond only legal violations, which “may draw upon notions of ‘good faith’ reporting” in cybersecurity or other domains.
  • Adverse Event Reporting.  Drawing on examples of post-deployment monitoring in other contexts, such as government reporting requirements related to medical and equipment malfunctions, the report describes adverse event reporting as another “critical first step” for “targeted AI regulation.”  The report recommends mandatory adverse event reporting systems that share reports with “relevant agencies with domain-specific regulatory authority and expertise” and focus on a “tightly defined” and periodically updated set of harms.  The report further recommends that policymakers combine mandatory reporting with “voluntary reporting for downstream users.”  The report outlines certain benefits of adverse event reporting, including identifying emerging and unanticipated harms, encouraging proactive measures to mitigate risks, improving coordination between the government and private sector, and reducing the costs of enforcement.  The report also notes likely challenges for adverse event reporting regimes, such as the difficulty of clearly defining “adverse events” or ensuring sufficient government resources for monitoring reports.
  • Scoping.  According to the report, “[w]ell-designed regulation is proportionate.”  The report “cautions against” frontier model regulations that use “thresholds based on developer-level properties” (e.g., employee headcount), which “may inadvertently ignore key players,” while noting that “training compute thresholds” may be the “most attractive option” for policymakers.

The findings of the Working Group, which was convened by Governor Gavin Newsom (D) in September 2024 following his veto of California’s proposed Safe & Secure Innovation for Frontier AI Models Act (SB 1047), could inform lawmakers as they move forward with foundation model legislation in the 2025 legislative session.  On May 28, for example, the California Senate passed SB 53, a foundation model whistleblower bill introduced by State Senator (and SB 1047 co-sponsor) Scott Wiener.  Additionally, on June 12, the New York legislature passed the Responsible AI Safety & Education (“RAISE”) Act (S 6953), a frontier model public safety bill that we previously covered here.

*              *              *

We will continue to provide updates on meaningful developments related to artificial intelligence and technology across our Inside Global Tech, Global Policy Watch, and Inside Privacy blogs.

Jennifer Johnson

Jennifer Johnson is a partner specializing in communications, media and technology matters who serves as Co-Chair of Covington’s Technology Industry Group and its global and multi-disciplinary Artificial Intelligence (AI) and Internet of Things (IoT) Groups. She represents and advises technology companies, content distributors, television companies, trade associations, and other entities on a wide range of media and technology matters. Jennifer has three decades of experience advising clients in the communications, media and technology sectors, and has held leadership roles in these practices for more than twenty years. On technology issues, she collaborates with Covington’s global, multi-disciplinary team to assist companies navigating the complex statutory and regulatory constructs surrounding this evolving area, including product counseling and technology transactions related to connected and autonomous vehicles, internet connected devices, artificial intelligence, smart ecosystems, and other IoT products and services. Jennifer serves on the Board of Editors of The Journal of Robotics, Artificial Intelligence & Law.

Jennifer assists clients in developing and pursuing strategic business and policy objectives before the Federal Communications Commission (FCC) and Congress and through transactions and other business arrangements. She regularly advises clients on FCC regulatory matters and advocates frequently before the FCC. Jennifer has extensive experience negotiating content acquisition and distribution agreements for media and technology companies, including program distribution agreements, network affiliation and other program rights agreements, and agreements providing for the aggregation and distribution of content on over-the-top app-based platforms. She also assists investment clients in structuring, evaluating, and pursuing potential investments in media and technology companies.

Micaela McMurrough

Micaela McMurrough serves as co-chair of Covington’s global and multi-disciplinary Technology Group and as co-chair of the Artificial Intelligence and Internet of Things (IoT) initiative. In her practice, she has represented clients in high-stakes antitrust, patent, trade secrets, contract, and securities litigation, and other complex commercial litigation matters, and she regularly represents and advises domestic and international clients on cybersecurity and data privacy issues, including cybersecurity investigations and cyber incident response. Micaela has advised clients on data breaches and other network intrusions, conducted cybersecurity investigations, and advised clients regarding evolving cybersecurity regulations and cybersecurity norms in the context of international law.

In 2016, Micaela was selected as one of thirteen Madison Policy Forum Military-Business Cybersecurity Fellows. She regularly engages with government, military, and business leaders in the cybersecurity industry in an effort to develop national strategies for complex cyber issues and policy challenges. Micaela previously served as a United States Presidential Leadership Scholar, principally responsible for launching a program to familiarize federal judges with various aspects of the U.S. national security structure and national intelligence community.

Prior to her legal career, Micaela served in the Military Intelligence Branch of the United States Army. She served as Intelligence Officer of a 1,200-member maneuver unit conducting combat operations in Afghanistan and was awarded the Bronze Star.

Matthew Shapanka

Matthew Shapanka practices at the intersection of law, policy, and politics. He advises clients before Congress, state legislatures, and government agencies, helping businesses to navigate complex legislative, regulatory, and investigations matters, mitigate their legal, political, and reputational risks, and capture business opportunities.

Drawing on more than 15 years of experience on Capitol Hill and in private practice, state government, and political campaigns, Matt develops and executes complex, multifaceted public policy initiatives for clients seeking actions by Congress, state legislatures, and federal and state government agencies. He regularly counsels and represents businesses in legislative and regulatory matters involving intellectual property, national security, regulation of critical and emerging technologies like artificial intelligence, connected and autonomous vehicles, and other tech policy issues. He also represents clients facing congressional investigations or inquiries across a range of committees and subject matters.

Matt rejoined Covington after serving as Chief Counsel for the U.S. Senate Committee on Rules and Administration, where he advised Chairwoman Amy Klobuchar (D-MN) on all legal, policy, and oversight matters before the Committee, particularly federal election and campaign finance law, Federal Election Commission nominations, and oversight of the legislative branch. Most significantly, Matt led the Committee’s staff work on the Electoral Count Reform Act, a landmark bipartisan law that updates the procedures for certifying and counting votes in presidential elections, and the Committee’s bipartisan joint investigation (with the Homeland Security Committee) into the security planning and response to the January 6th attack.

Both in Congress and at Covington, Matt has prepared dozens of corporate and nonprofit executives, academics, government officials, and presidential nominees for testimony at congressional committee hearings and depositions. He is a skilled legislative drafter who has composed dozens of bills and amendments introduced in Congress and state legislatures, including several that have been enacted into law across multiple policy areas. Matt also leads the firm’s state policy practice, advising clients on complex multistate legislative and regulatory matters and managing state-level advocacy efforts.

In addition to his policy work, Matt advises and represents clients on the full range of political law compliance and enforcement matters involving federal election, campaign finance, lobbying, and government ethics laws, the Securities and Exchange Commission’s “Pay-to-Play” rule, and the election and political laws of states and municipalities across the country.

Before law school, Matt served in the administration of former Governor Deval Patrick (D-MA) as a research analyst in the Massachusetts Recovery & Reinvestment Office, where he worked on policy, communications, and compliance matters for federal economic recovery funding awarded to the state. He has also staffed federal, state, and local political candidates in Massachusetts and New Hampshire.

Jayne Ponder

Jayne Ponder provides strategic advice to national and multinational companies across industries on existing and emerging data privacy, cybersecurity, and artificial intelligence laws and regulations.

Jayne’s practice focuses on helping clients launch and improve products and services that involve laws governing data privacy, artificial intelligence, sensitive data and biometrics, marketing and online advertising, connected devices, and social media. For example, Jayne regularly advises clients on the California Consumer Privacy Act, Colorado AI Act, and the developing patchwork of U.S. state data privacy and artificial intelligence laws. She advises clients on drafting consumer notices, designing consent flows and consumer choices, drafting and negotiating commercial terms, building consumer rights processes, and undertaking data protection impact assessments. In addition, she routinely partners with clients on the development of risk-based privacy and artificial intelligence governance programs that reflect the dynamic regulatory environment and incorporate practical mitigation measures.

Jayne routinely represents clients in enforcement actions brought by the Federal Trade Commission and state attorneys general, particularly in areas related to data privacy, artificial intelligence, advertising, and cybersecurity. Additionally, she helps clients to advance advocacy in rulemaking processes led by federal and state regulators on data privacy, cybersecurity, and artificial intelligence topics.

As part of her practice, Jayne also advises companies on cybersecurity incident preparedness and response, including by drafting, revising, and testing incident response plans, conducting cybersecurity gap assessments, engaging vendors, and analyzing obligations under breach notification laws following an incident.

Jayne maintains an active pro bono practice, including assisting small and nonprofit entities with data privacy topics and elder estate planning.

August Gweon

August Gweon counsels national and multinational companies on data privacy, cybersecurity, antitrust, and technology policy issues, including issues related to artificial intelligence and other emerging technologies. August leverages his experiences in AI and technology policy to help clients understand complex technology developments, risks, and policy trends.

August regularly provides advice to clients on privacy and competition frameworks and AI regulations, with an increasing focus on U.S. state AI legislative developments and trends related to synthetic content, automated decision-making, and generative AI. He also assists clients in assessing federal and state privacy regulations like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in public policy discussions and rulemaking processes.

Analese Bridges

Analese Bridges is an associate in the firm’s Washington, DC office and a member of the Data Privacy and Cybersecurity and Advertising and Consumer Protection Practice Groups. She represents and advises clients on a range of cybersecurity, data privacy, and consumer protection issues, including cyber and data security incident response and preparedness, cross-border privacy law, government and internal investigations, and regulatory compliance.