Authors: Jennifer Johnson, Jayne Ponder, August Gweon, Analese Bridges

State lawmakers are considering a diverse array of AI legislation, with hundreds of bills introduced in 2025.  As described further in this blog post, many of these AI legislative proposals fall into several key categories: (1) comprehensive consumer protection legislation similar to the Colorado AI Act, (2) sector-specific legislation on automated decision-making, (3) chatbot regulation, (4) generative AI transparency requirements, (5) AI data center and energy usage requirements, and (6) frontier model public safety legislation.  Although these categories represent just a subset of current AI legislative activity, they illustrate the major priorities of state legislatures and highlight new AI laws that may be on the horizon.

  • Consumer Protection.  Lawmakers in over a dozen states have introduced legislation aimed at reducing algorithmic discrimination in high-risk AI or automated decision-making systems used to make “consequential decisions,” embracing the risk- and role-based approach of the Colorado AI Act.  In general, these frameworks would establish developer and deployer duties of care to protect consumers from algorithmic discrimination and would require risks or instances of algorithmic discrimination to be reported to state attorneys general.  They would also require notices to consumers and disclosures to other parties and establish consumer rights related to the AI system.  For example, Virginia’s High-Risk AI Developer & Deployer Act (HB 2094), which follows this approach, passed out of Virginia’s legislature this month.
  • Sector-Specific Automated Decision-making.  Lawmakers in more than a dozen states have introduced legislation that would regulate the use of AI or automated decision-making tools (“ADMT”) in specific sectors, including healthcare, insurance, employment, and finance.  For example, Massachusetts HD 3750 would amend the state’s health insurance consumer protection law to require healthcare insurance carriers to disclose the use of AI or ADMT for reviewing insurance claims and report AI and training data information to the Massachusetts Division of Insurance.  Other bills would regulate the use of ADMT in the financial sector, such as New York A773, which would require banks that use ADMT for lending decisions to conduct annual disparate impact analyses and disclose such analyses to the New York Attorney General.  Relatedly, state legislatures are considering a wide range of approaches to regulating employers’ uses of AI and ADMT.  For example, Georgia SB 164 and Illinois SB 2255 would both prohibit employers from using ADMT to set wages unless certain requirements are satisfied.
  • Chatbots.  Another key trend in 2025 AI legislation focuses on AI chatbots.  For example, Hawaii HB 639 / SB 640, Idaho HB 127, Illinois HB 3021, Massachusetts SD 2223, and New York A222 would either require chatbot providers to provide prominent disclosures to inform users that they are not interacting with a human or impose liability on chatbot providers for misleading or deceptive chatbot communications.
  • Generative AI Transparency.  State legislatures are also considering legislation to regulate providers of generative AI systems and platforms that host synthetic content.  Some of these bills, such as Washington HB 1170, Florida HB 369, Illinois SB 1929, and New Mexico HB 401, would require generative AI providers to include watermarks in AI-generated outputs and provide free AI detection tools for users, similar to the California AI Transparency Act, which passed last year.  Other bills, such as Illinois SB 1792 and Utah SB 226, would require generative AI owners, licensees, or operators to display notices to users that disclose the use of generative AI or warn users that AI-generated outputs may be inaccurate, inappropriate, or harmful.
  • AI Data Centers & Energy.  Lawmakers across the country have introduced legislation to address the growing energy demands of AI development and related environmental concerns.  For example, California AB 222 would require data centers to estimate and report to the state the total energy used to develop certain large AI models, and would require covered AI developers to estimate and publish the total energy used to develop each model.  Similarly, Massachusetts HD 4192 would require both AI developers and operators of sources of greenhouse gas emissions to monitor, track, and report environmental impacts and mitigations.
  • Frontier Model Public Safety.  Following the California legislature’s passage and the Governor’s subsequent veto of California SB 1047 last year, California State Senator Scott Wiener filed SB 53 with the goal of “establish[ing] safeguards for the development of [AI] frontier models.”  Lawmakers in other states are also considering legislation to address public safety risks posed by “frontier” or “foundation” models, generally defined as AI models that meet certain computational or monetary thresholds.  For example, Illinois HB 3506 would require developers of certain large AI models to conduct risk assessments every 90 days, publish annual third-party audits, and implement foundation model safety and security protocols.  As another approach, Rhode Island H 5224 would impose strict liability on developers of covered AI models for all injuries to non-users that are factually and proximately caused by the covered model.

*              *              *

Although the likelihood of passage for these AI bills remains unclear, any state AI legislation that does pass is likely to have significant effects on the U.S. AI regulatory landscape, especially in the absence of federal action on AI.  We will continue to monitor these and related AI developments across our Inside Global Tech, Global Policy Watch, and Inside Privacy blogs.

Jennifer Johnson

Jennifer Johnson is a partner specializing in communications, media and technology matters who serves as Co-Chair of Covington’s Technology Industry Group and its global and multi-disciplinary Artificial Intelligence (AI) and Internet of Things (IoT) Groups. She represents and advises technology companies, content distributors, television companies, trade associations, and other entities on a wide range of media and technology matters. Jennifer has three decades of experience advising clients in the communications, media and technology sectors, and has held leadership roles in these practices for more than twenty years. On technology issues, she collaborates with Covington’s global, multi-disciplinary team to assist companies navigating the complex statutory and regulatory constructs surrounding this evolving area, including product counseling and technology transactions related to connected and autonomous vehicles, internet connected devices, artificial intelligence, smart ecosystems, and other IoT products and services. Jennifer serves on the Board of Editors of The Journal of Robotics, Artificial Intelligence & Law.

Jennifer assists clients in developing and pursuing strategic business and policy objectives before the Federal Communications Commission (FCC) and Congress and through transactions and other business arrangements. She regularly advises clients on FCC regulatory matters and advocates frequently before the FCC. Jennifer has extensive experience negotiating content acquisition and distribution agreements for media and technology companies, including program distribution agreements, network affiliation and other program rights agreements, and agreements providing for the aggregation and distribution of content on over-the-top app-based platforms. She also assists investment clients in structuring, evaluating, and pursuing potential investments in media and technology companies.

Jayne Ponder

Jayne Ponder provides strategic advice to national and multinational companies across industries on existing and emerging data privacy, cybersecurity, and artificial intelligence laws and regulations.

Jayne’s practice focuses on helping clients launch and improve products and services that involve laws governing data privacy, artificial intelligence, sensitive data and biometrics, marketing and online advertising, connected devices, and social media. For example, Jayne regularly advises clients on the California Consumer Privacy Act, Colorado AI Act, and the developing patchwork of U.S. state data privacy and artificial intelligence laws. She advises clients on drafting consumer notices, designing consent flows and consumer choices, drafting and negotiating commercial terms, building consumer rights processes, and undertaking data protection impact assessments. In addition, she routinely partners with clients on the development of risk-based privacy and artificial intelligence governance programs that reflect the dynamic regulatory environment and incorporate practical mitigation measures.

Jayne routinely represents clients in enforcement actions brought by the Federal Trade Commission and state attorneys general, particularly in areas related to data privacy, artificial intelligence, advertising, and cybersecurity. Additionally, she helps clients to advance advocacy in rulemaking processes led by federal and state regulators on data privacy, cybersecurity, and artificial intelligence topics.

As part of her practice, Jayne also advises companies on cybersecurity incident preparedness and response, including by drafting, revising, and testing incident response plans, conducting cybersecurity gap assessments, engaging vendors, and analyzing obligations under breach notification laws following an incident.

Jayne maintains an active pro bono practice, including assisting small and nonprofit entities with data privacy topics and elder estate planning.

August Gweon

August Gweon counsels national and multinational companies on data privacy, cybersecurity, antitrust, and technology policy issues, including issues related to artificial intelligence and other emerging technologies. August leverages his experiences in AI and technology policy to help clients understand complex technology developments, risks, and policy trends.

August regularly provides advice to clients on privacy and competition frameworks and AI regulations, with an increasing focus on U.S. state AI legislative developments and trends related to synthetic content, automated decision-making, and generative AI. He also assists clients in assessing federal and state privacy regulations like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in public policy discussions and rulemaking processes.

Analese Bridges

Analese Bridges is an associate in the firm’s Washington, DC office and a member of the Data Privacy and Cybersecurity and Advertising and Consumer Protection Practice Groups. She represents and advises clients on a range of cybersecurity, data privacy, and consumer protection issues, including cyber and data security incident response and preparedness, cross-border privacy law, government and internal investigations, and regulatory compliance.