California Enacts New Privacy Laws
Recently, California Governor Gavin Newsom signed into law several privacy and related bills, including new laws governing browser opt-out preference signals, social media account deletion, data brokers, reproductive and health services, age signals for app stores, social media “black box warning” labels for minors, and companion chatbots. This blog summarizes the statutes’ key takeaways.
Continue Reading California Enacts New Privacy Laws
Jayne Ponder
Jayne Ponder provides strategic advice to national and multinational companies across industries on existing and emerging data privacy, cybersecurity, and artificial intelligence laws and regulations.
Jayne’s practice focuses on helping clients launch and improve products and services that implicate laws governing data privacy, artificial intelligence, sensitive data and biometrics, marketing and online advertising, connected devices, and social media. For example, Jayne regularly advises clients on the California Consumer Privacy Act, the Colorado AI Act, and the developing patchwork of U.S. state data privacy and artificial intelligence laws. She advises clients on drafting consumer notices, designing consent flows and consumer choices, drafting and negotiating commercial terms, building consumer rights processes, and undertaking data protection impact assessments. In addition, she routinely partners with clients on the development of risk-based privacy and artificial intelligence governance programs that reflect the dynamic regulatory environment and incorporate practical mitigation measures.
Jayne routinely represents clients in enforcement actions brought by the Federal Trade Commission and state attorneys general, particularly in areas related to data privacy, artificial intelligence, advertising, and cybersecurity. She also helps clients advance advocacy in rulemaking processes led by federal and state regulators on data privacy, cybersecurity, and artificial intelligence topics.
As part of her practice, Jayne also advises companies on cybersecurity incident preparedness and response, including by drafting, revising, and testing incident response plans, conducting cybersecurity gap assessments, engaging vendors, and analyzing obligations under breach notification laws following an incident.
Jayne maintains an active pro bono practice, including assisting small and nonprofit entities with data privacy topics and elder estate planning.
California Privacy Agency Fines Tractor Supply $1.35 Million Over CCPA Violations
On September 30, 2025, the California Privacy Protection Agency (“Agency”) announced a decision and a $1.35 million fine to resolve allegations that Tractor Supply Co. (“Tractor Supply”) violated the California Consumer Privacy Act (“CCPA”). The settlement comes after the Agency filed a petition to enforce an investigative subpoena against Tractor Supply. In addition to imposing the Agency’s largest fine to date, the settlement marks the Agency’s first enforcement action related to job applicant personal data. As in its enforcement actions against American Honda Motor Co., Inc. and Todd Snyder, Inc., the Agency continues to focus on how businesses facilitate consumer rights under the CCPA.
Continue Reading California Privacy Agency Fines Tractor Supply $1.35 Million Over CCPA Violations
Oregon DOJ Publishes Enforcement Report on the Oregon Consumer Privacy Act
On August 29, the Oregon Department of Justice (DOJ) issued an enforcement report and press release covering its first year of enforcement of the Oregon Consumer Privacy Act (OCPA). The OCPA took effect on July 1, 2024, and the cure period sunsets on January 1, 2026. We previously summarized some of the requirements in the OCPA here. This blog summarizes notable takeaways from the enforcement report.
Continue Reading Oregon DOJ Publishes Enforcement Report on the Oregon Consumer Privacy Act
U.S. Tech Legislative & Regulatory Update – 2025 Mid-Year Update
This update highlights key mid-year legislative and regulatory developments related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), the Internet of Things (“IoT”), and cryptocurrencies and blockchain, and builds on our first quarter update.
I. Federal AI Legislative Developments
In the first session of the 119th Congress, lawmakers rejected a proposed moratorium on state and local enforcement of AI laws and advanced several AI legislative proposals focused on deepfake-related harms. Specifically, on July 1, after weeks of negotiations, the Senate voted 99-1 to strike a proposed 10-year moratorium on state and local enforcement of AI laws from the budget reconciliation package, the One Big Beautiful Bill Act (H.R. 1), which President Trump signed into law. The vote to strike the moratorium follows the collapse of an agreement on revised language that would have shortened the moratorium to 5 years and allowed states to enforce “generally applicable laws,” including child online safety, digital replica, and CSAM laws, that do not have an “undue or disproportionate effect” on AI. Congress could technically still consider the moratorium during this session, but the chances of that happening are low based on both the political atmosphere and the lack of a must-pass legislative vehicle in which it could be included. See our blog post on this topic for more information.
Additionally, lawmakers continue to focus legislation on deepfakes and intimate imagery. For example, on May 19, President Trump signed the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks (“TAKE IT DOWN”) Act (H.R. 633 / S. 146) into law, which requires online platforms to establish a notice and takedown process for nonconsensual intimate visual depictions, including certain depictions created using AI. See our blog post on this topic for more information. Meanwhile, members of Congress continued to pursue additional legislation to address deepfake-related harms, such as the STOP CSAM Act of 2025 (S. 1829 / H.R. 3921) and the Disrupt Explicit Forged Images And Non-Consensual Edits (“DEFIANCE”) Act (H.R. 3562 / S. 1837).
Continue Reading U.S. Tech Legislative & Regulatory Update – 2025 Mid-Year Update
NIST Welcomes Comments for AI Standards Zero Drafts Project
On July 29, 2025, the National Institute of Standards & Technology (“NIST”) unveiled an outline for preliminary, stakeholder-driven standards, known as a “zero draft,” for AI testing, evaluation, verification, and validation (“TEVV”). This outline is part of NIST’s AI Standards Zero Drafts pilot project, which was announced on March 25, 2025, as we previously reported. The goal is to create a flexible, high-level framework that companies can use to design their own AI testing and validation procedures. Of note, NIST is not prescribing exact methods for testing and validation. Instead, the outline offers a structure around key terms, lifecycle stages, and guiding principles intended to align with future international standards. NIST has asked for stakeholder input on the topics, scope, and priorities of the Zero Drafts process; feedback is open until September 12, 2025.
The NIST outline breaks AI TEVV into several foundational elements, a non-exhaustive list of which includes:
Continue Reading NIST Welcomes Comments for AI Standards Zero Drafts Project
Texas Enacts AI Consumer Protection Law
On June 22, Texas Governor Greg Abbott (R) signed the Texas Responsible AI Governance Act (“TRAIGA”) (HB 149) into law. The law, which takes effect on January 1, 2026, makes Texas the second state to enact comprehensive AI consumer protection legislation, following the 2024 enactment of the Colorado…
Continue Reading Texas Enacts AI Consumer Protection Law
California Frontier AI Working Group Issues Final Report on Frontier Model Regulation
On June 17, the Joint California Policy Working Group on AI Frontier Models (“Working Group”) issued its final report on frontier AI policy, following public feedback on the draft version of the report released in March. The report describes “frontier models” as the “most capable” subset of foundation models, or…
Continue Reading California Frontier AI Working Group Issues Final Report on Frontier Model Regulation
New York Legislature Passes Sweeping AI Safety Legislation
On June 12, the New York legislature passed the Responsible AI Safety & Education (“RAISE”) Act (S 6953), a frontier model public safety bill that would establish safeguard, reporting, disclosure, and other requirements for large developers of frontier AI models. If signed into law by Governor Kathy Hochul…
Continue Reading New York Legislature Passes Sweeping AI Safety Legislation
U.S. Tech Legislative & Regulatory Update – First Quarter 2025
This quarterly update highlights key legislative, regulatory, and litigation developments in the first quarter of 2025 related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), and cryptocurrencies and blockchain.
I. Artificial Intelligence
A. Federal Legislative Developments
In the first quarter, members of Congress introduced several AI bills addressing…
Continue Reading U.S. Tech Legislative & Regulatory Update – First Quarter 2025
California Frontier AI Working Group Issues Report on Foundation Model Regulation
On March 18, the Joint California Policy Working Group on AI Frontier Models (the “Working Group”) released its draft report on the regulation of foundation models, with the aim of providing an “evidence-based foundation for AI policy decisions” in California that “ensure[s] these powerful technologies benefit society globally while reasonably managing emerging risks.” The Working Group was established by California Governor Gavin Newsom (D) in September 2024, following his veto of the Safe & Secure Innovation for Frontier AI Models Act (SB 1047), authored by California State Senator Scott Wiener (D-San Francisco). The Working Group builds on California’s partnership with Stanford University and the University of California, Berkeley, established by Governor Newsom’s 2023 Executive Order on generative AI.
Noting that “foundation model capabilities have rapidly improved” since the veto of SB 1047 and that California’s “unique opportunity” to shape AI governance “may not remain open indefinitely,” the report assesses transparency, third-party risk assessment, and adverse event reporting requirements as key components for foundation model regulation.
Transparency Requirements. The report finds that foundation model transparency requirements are a “necessary foundation” for AI regulation and recommends that policymakers “prioritize public-facing transparency to best advance accountability.” Specifically, the report recommends transparency requirements that focus on five categories of information about foundation models: (1) training data acquisition, (2) developer safety practices, (3) developer security practices, (4) pre-deployment testing by developers and third parties, and (5) downstream impacts, potentially including disclosures from entities that host foundation models for download or use.
Third-Party Risk Assessments. Noting that transparency “is often insufficient and requires supplementary verification mechanisms” for accountability, the report adds that third-party risk assessments are “essential” for “creating incentives for developers to increase the safety of their models.” To support effective third-party AI evaluations, the report calls on policymakers to consider establishing safe harbors that indemnify public interest safety research and “routing mechanisms” to quickly communicate identified vulnerabilities to developers and affected parties.
Whistleblower Protections. Additionally, the report assesses the need for whistleblower protections for employees and contractors of foundation model developers. The report advises policymakers to “consider protections that cover a broader range of [AI developer] activities,” such as failures to follow a company’s AI safety policy, even if reported conduct does not violate existing laws.
Continue Reading California Frontier AI Working Group Issues Report on Foundation Model Regulation