
Jayne Ponder

Jayne Ponder provides strategic advice to national and multinational companies across industries on existing and emerging data privacy, cybersecurity, and artificial intelligence laws and regulations.

Jayne’s practice focuses on helping clients launch and improve products and services that implicate laws governing data privacy, artificial intelligence, sensitive data and biometrics, marketing and online advertising, connected devices, and social media. For example, Jayne regularly advises clients on the California Consumer Privacy Act, the Colorado AI Act, and the developing patchwork of U.S. state data privacy and artificial intelligence laws. She advises clients on drafting consumer notices, designing consent flows and consumer choices, drafting and negotiating commercial terms, building consumer rights processes, and undertaking data protection impact assessments. In addition, she routinely partners with clients on the development of risk-based privacy and artificial intelligence governance programs that reflect the dynamic regulatory environment and incorporate practical mitigation measures.

Jayne routinely represents clients in enforcement actions brought by the Federal Trade Commission and state attorneys general, particularly in areas related to data privacy, artificial intelligence, advertising, and cybersecurity. Additionally, she helps clients to advance advocacy in rulemaking processes led by federal and state regulators on data privacy, cybersecurity, and artificial intelligence topics.

As part of her practice, Jayne also advises companies on cybersecurity incident preparedness and response, including by drafting, revising, and testing incident response plans, conducting cybersecurity gap assessments, engaging vendors, and analyzing obligations under breach notification laws following an incident.

Jayne maintains an active pro bono practice, including assisting small and nonprofit entities with data privacy topics and elder estate planning.

This quarterly update highlights key legislative, regulatory, and litigation developments in the first quarter of 2025 related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), and cryptocurrencies and blockchain. 

I. Artificial Intelligence

Federal Legislative Developments

In the first quarter, members of Congress introduced several AI bills addressing

Continue Reading U.S. Tech Legislative & Regulatory Update – First Quarter 2025

On March 18, the Joint California Policy Working Group on AI Frontier Models (the “Working Group”) released its draft report on the regulation of foundation models, with the aim of providing an “evidence-based foundation for AI policy decisions” in California that “ensure[s] these powerful technologies benefit society globally while reasonably managing emerging risks.”  The Working Group was established by California Governor Gavin Newsom (D) in September 2024, following his veto of California State Senator Scott Wiener (D-San Francisco)’s Safe & Secure Innovation for Frontier AI Models Act (SB 1047).  The Working Group builds on California’s partnership with Stanford University and the University of California, Berkeley, established by Governor Newsom’s 2023 Executive Order on generative AI.

Noting that “foundation model capabilities have rapidly improved” since the veto of SB 1047 and that California’s “unique opportunity” to shape AI governance “may not remain open indefinitely,” the report assesses transparency, third-party risk assessment, and adverse event reporting requirements as key components for foundation model regulation.

Transparency Requirements.  The report finds that foundation model transparency requirements are a “necessary foundation” for AI regulation and recommends that policymakers “prioritize public-facing transparency to best advance accountability.”  Specifically, the report recommends transparency requirements that focus on five categories of information about foundation models: (1) training data acquisition, (2) developer safety practices, (3) developer security practices, (4) pre-deployment testing by developers and third parties, and (5) downstream impacts, potentially including disclosures from entities that host foundation models for download or use.

Third-Party Risk Assessments.  Noting that transparency “is often insufficient and requires supplementary verification mechanisms” for accountability, the report adds that third-party risk assessments are “essential” for “creating incentives for developers to increase the safety of their models.”  To support effective third-party AI evaluations, the report calls on policymakers to consider establishing safe harbors that indemnify public interest safety research and “routing mechanisms” to quickly communicate identified vulnerabilities to developers and affected parties. 

Whistleblower Protections.  Additionally, the report assesses the need for whistleblower protections for employees and contractors of foundation model developers.  The report advises policymakers to “consider protections that cover a broader range of [AI developer] activities,” such as failures to follow a company’s AI safety policy, even if reported conduct does not violate existing laws.

Continue Reading California Frontier AI Working Group Issues Report on Foundation Model Regulation

On February 27, California State Senator Scott Wiener (D-San Francisco) released the text of SB 53, reviving efforts to establish AI safety regulations in a state that is home to both Silicon Valley and the nation’s first comprehensive privacy law.  SB 53 proposes a significantly narrower approach compared to

Continue Reading California Senator Introduces AI Safety Bill

Authors: Jennifer Johnson, Jayne Ponder, August Gweon, Analese Bridges

State lawmakers are considering a diverse array of AI legislation, with hundreds of bills introduced in 2025.  As described further in this blog post, many of these AI legislative proposals fall into several key categories: (1) comprehensive consumer protection legislation similar to the Colorado AI Act, (2) sector-specific legislation on automated decision-making, (3) chatbot regulation, (4) generative AI transparency requirements, (5) AI data center and energy usage requirements, and (6) frontier model public safety legislation.  Although these categories represent just a subset of current AI legislative activity, they illustrate the major priorities of state legislatures and highlight new AI laws that may be on the horizon.

  • Consumer Protection.  Lawmakers in over a dozen states have introduced legislation aimed at reducing algorithmic discrimination in high-risk AI or automated decision-making systems used to make “consequential decisions,” embracing the risk- and role-based approach of the Colorado AI Act.  In general, these frameworks would establish developer and deployer duties of care to protect consumers from algorithmic discrimination and would require risks or instances of algorithmic discrimination to be reported to state attorneys general.  They would also require notices to consumers and disclosures to other parties and establish consumer rights related to the AI system.  For example, Virginia’s High-Risk AI Developer & Deployer Act (HB 2094), which follows this approach, passed out of Virginia’s legislature this month.
  • Sector-Specific Automated Decision-making.  Lawmakers in more than a dozen states have introduced legislation that would regulate the use of AI or automated decision-making tools (“ADMT”) in specific sectors, including healthcare, insurance, employment, and finance.  For example, Massachusetts HD 3750 would amend the state’s health insurance consumer protection law to require healthcare insurance carriers to disclose the use of AI or ADMT for reviewing insurance claims and report AI and training data information to the Massachusetts Division of Insurance.  Other bills would regulate the use of ADMT in the financial sector, such as New York A773, which would require banks that use ADMT for lending decisions to conduct annual disparate impact analyses and disclose such analyses to the New York Attorney General.  Relatedly, state legislatures are considering a wide range of approaches to regulating employers’ uses of AI and ADMT.  For example, Georgia SB 164 and Illinois SB 2255 would both prohibit employers from using ADMT to set wages unless certain requirements are satisfied.

Continue Reading Blog Post: State Legislatures Consider New Wave of 2025 AI Legislation

Attorneys General in Oregon and Connecticut issued guidance over the holidays interpreting their authority under their states’ comprehensive privacy statutes and related authorities.  Specifically, the Oregon Attorney General’s guidance focuses on laws relevant for artificial intelligence (“AI”), and the Connecticut Attorney General’s guidance focuses on opt-out preference signals that go

Continue Reading State Attorneys General Issue Guidance On Privacy & Artificial Intelligence

This quarterly update highlights key legislative, regulatory, and litigation developments in the third quarter of 2024 related to artificial intelligence (“AI”) and connected and automated vehicles (“CAVs”).  As noted below, some of these developments provide industry with the opportunity for participation and comment.

I. Artificial Intelligence

Federal Legislative Developments

There continued to be strong bipartisan interest in passing federal legislation related to AI.  While it has been challenging to pass legislation through this Congress, there remains the possibility that one or more of the more targeted bills that have bipartisan support and Committee approval could advance during the lame duck period.

  • Senate Commerce, Science, and Transportation Committee: Lawmakers in the Senate Commerce, Science, and Transportation Committee moved forward with nearly a dozen AI-related bills, including legislation focused on developing voluntary technical guidelines for AI systems and establishing AI testing and risk assessment frameworks. 
    • In July, the Committee voted to advance the Validation and Evaluation for Trustworthy (VET) Artificial Intelligence Act (S.4769), which was introduced by Senators John Hickenlooper (D-CO) and Shelley Moore Capito (R-WV).  The Act would require the National Institute of Standards and Technology (“NIST”) to develop voluntary guidelines and specifications for internal and external assurances of AI systems, in collaboration with public and private sector organizations. 
    • In August, the Promoting United States Leadership in Standards Act of 2024 (S.3849) was placed on the Senate legislative calendar after advancing out of the Committee in July.  Introduced in February 2024 by Senators Mark Warner (D-VA) and Marsha Blackburn (R-TN), the Act would require NIST to support U.S. involvement in the development of AI technical standards through briefings, pilot programs, and other activities.  
    • In July, the Future of Artificial Intelligence Innovation Act of 2024 (S.4178)—introduced in April by Senators Maria Cantwell (D-WA), Todd Young (R-IN), John Hickenlooper (D-CO), and Marsha Blackburn (R-TN)—was ordered to be reported out of the Committee and gained three additional co-sponsors: Senators Roger F. Wicker (R-MS), Ben Ray Lujan (D-NM), and Kyrsten Sinema (I-AZ).  The Act would codify the AI Safety Institute, which would be required to develop voluntary guidelines and standards for promoting AI innovation through public-private partnerships and international alliances.
    • In July, the Artificial Intelligence Research, Innovation, and Accountability Act of 2023 (S.3312), passed out of the Committee, as amended.  Introduced in November 2023 by Senators John Thune (R-SD), Amy Klobuchar (D-MN), Roger Wicker (R-MS), John Hickenlooper (D-CO), Ben Ray Lujan (D-NM), and Shelley Moore Capito (R-WV), the Act would establish a comprehensive regulatory framework for “high-impact” AI systems, including testing and evaluation standards, risk assessment requirements, and transparency report requirements.  The Act would also require NIST to develop sector-specific recommendations for agency oversight of high-impact AI, and to research and develop means for distinguishing between content created by humans and AI systems.
  • Senate Homeland Security and Governmental Affairs Committee:  In July, the Senate Homeland Security Committee voted to advance the PREPARED for AI Act (S.4495).  Introduced in June by Senators Gary Peters (D-MI) and Thomas Tillis (R-NC), the Act would establish a risk-based framework for the procurement and use of AI by federal agencies and create a Chief AI Officers Council and agency AI Governance Board to ensure that federal agencies benefit from advancements in AI.
  • National Defense Authorization Act for Fiscal Year 2025:  In August, Senators Gary Peters (D-MI) and Mike Braun (R-IN) proposed an amendment (S.Amdt.3232) to the National Defense Authorization Act for Fiscal Year 2025 (S.4638) (“NDAA”).  The amendment would add the Transparent Automated Governance Act and the AI Leadership Training Act to the NDAA.  The Transparent Automated Governance Act would require the Office of Management and Budget (“OMB”) to issue guidance to agencies to implement transparency practices relating to the use of AI and other automated systems.  The AI Leadership Training Act would require OMB to establish a training program for federal procurement officials on the operational benefits and privacy risks of AI.  The Act would also require the Office of Personnel Management (“OPM”) to establish a training program on AI for federal management officials and supervisors.   

Continue Reading U.S. Tech Legislative, Regulatory & Litigation Update – Third Quarter 2024

On October 28, Texas State Representative Giovanni Capriglione (R-Tarrant County) released a draft of the Texas Responsible AI Governance Act (“TRAIGA”), after nearly a year collecting input from industry stakeholders.  Representative Capriglione, who authored Texas’s Data Privacy and Security Act (discussed here) and currently co-chairs the state’s AI Advisory Council, appears likely to introduce TRAIGA in the upcoming legislative session scheduled to begin on January 14, 2025.  Modeled after the Colorado AI Act (SB 205) (discussed here) and the EU AI Act, TRAIGA would establish obligations for developers, deployers, and distributors of “high-risk AI systems.”  Additionally, TRAIGA would establish an “AI Regulatory Sandbox Program” for participating AI developers to test AI systems under a statutory exemption.

Although a number of states have expressed significant interest in AI regulation, if TRAIGA passes, Texas would become only the second state to enact industry-agnostic, risk-based AI legislation, following the passage of the Colorado AI Act in May.  There is significant activity in other states as well, as the California Privacy Protection Agency considers rules that would apply to certain automated decision and AI systems, and other states are expected to introduce AI legislation in the new session.  In addition to its requirements for high-risk AI and its AI sandbox program, TRAIGA would amend Texas’s Data Privacy and Security Act to incorporate AI-specific provisions and would provide for an AI workforce grant program and a new “AI Council” to provide advisory opinions and guidance on AI.

Despite these similarities, however, a number of provisions in the 41-page draft of TRAIGA would differ from the Colorado AI Act:

Lower Thresholds for “High-Risk AI.”  Although TRAIGA takes a risk-based approach to regulation by focusing requirements on AI systems that present heightened risks to individuals, the scope of TRAIGA’s high-risk AI systems would arguably be broader than under the Colorado AI Act.  First, TRAIGA would apply to systems that are a “contributing factor” in consequential decisions, rather than only those that constitute a “substantial factor” in such decisions, as contemplated by the Colorado AI Act.  Additionally, TRAIGA would define “consequential decision” more broadly than the Colorado AI Act, to include decisions that affect consumers’ access to, cost of, or terms of, for example, transportation services, criminal case assessments, and electricity services.

Continue Reading Texas Legislature to Consider Sweeping AI Legislation in 2025

On March 28, the White House Office of Management and Budget (OMB) released guidance on governance and risk management for federal agency use of artificial intelligence (AI).  The guidance was issued in furtherance of last fall’s White House AI Executive Order, which established goals to promote the safe, secure, and

Continue Reading OMB Issues First Governmentwide AI Policy for Federal Agencies

On February 9, the Third Appellate District of California vacated a trial court’s decision that held that enforcement of the California Privacy Protection Agency’s (“CPPA”) regulations could not commence until one year after the finalized date of the regulations.  As we previously explained, the Superior Court’s order prevented the

Continue Reading California Appeals Court Vacates Enforcement Delay of CPPA Regulations

U.S. policymakers have continued to express interest in legislation to regulate artificial intelligence (“AI”), particularly at the state level.  Although comprehensive AI bills and frameworks in Congress have received substantial attention, state legislatures also have been moving forward with their own efforts to regulate AI.  This blog post summarizes key

Continue Reading Trends in AI:  U.S. State Legislative Developments