With most state legislative sessions across the country adjourned or winding down without enacting significant artificial intelligence legislation, Colorado and California continue their steady drive to adopt and refine comprehensive legislation regulating the development and deployment of AI systems.
Colorado
Although Colorado’s AI law (SB 205), which Governor Jared Polis (D) signed into law in May, does not take effect until February 1, 2026, lawmakers have already begun a process for refining the nation’s first comprehensive AI law. As we described here, the new law will require developers and deployers of “high-risk” AI systems to comply with certain requirements in order to mitigate risks of algorithmic discrimination.
On June 13, Governor Polis, Attorney General Phil Weiser (D), and Senate Majority Leader Robert Rodriguez (D) issued a public letter announcing a “process to revise” the new law before it even takes effect and to “minimize unintended consequences associated with its implementation.” The revision process will address concerns that the high cost of compliance will adversely affect “home grown businesses” in Colorado, including through “barriers to growth and product development, job losses, and a diminished capacity to raise capital.”
The letter proposes “a handful of specific areas” for revision, including:
- Refining SB 205’s definition of AI systems to focus on “the most high-risk systems” in order to align with federal measures and frameworks in states with substantial technology sectors. This goal aligns with the officials’ call for “harmony across any regulatory framework adopted by states” to “limit the burden associated with a multi-state compliance scheme that deters investment and hamstrings small technology firms.” The officials add that they “remain open to delays in the implementation” of the new law “to ensure such harmonization.”
- Narrowing SB 205’s requirements to focus on developers of high-risk systems and avoid regulating “small companies that may deploy AI within third-party software that they use in the ordinary course of business.” This goal addresses concerns of Colorado businesses that the new law could “inadvertently impose prohibitively high costs” on AI deployers.
- Shifting from a “proactive disclosure regime” to a “traditional enforcement regime managed by the Attorney General investigating matters after the fact.” This goal also focuses on protecting Colorado’s small businesses from prohibitively high costs that could deter investment and hamper Colorado’s technology sector.
The process is designed to “complement” the work of the AI Impact Task Force established by HB 1468, which was signed into law on June 6. The Task Force is charged with recommending definitions, requirements, codes, benchmarks, and best practices related to algorithmic discrimination and AI systems. The Task Force includes Attorney General Weiser, whose office is granted rulemaking authority under SB 205.
The letter also follows Governor Polis’s May 17 signing statement, which expressed concerns about the “impact this law may have on an industry that is fueling critical technological advancements” and encouraged Colorado lawmakers to “work closely with stakeholders” to “amend this bill to conform with evidence based findings and recommendations for the regulation of this industry.”
Although it is too early to forecast the outcome of the revision process for SB 205, the goals set out by policymakers could lead to a significant scaling back of the law’s disclosure requirements for entities that deploy AI systems. At the same time, Colorado officials have not signaled a willingness to ease requirements for AI developers, or to modify requirements that already align with approaches taken by other states. In their public letter, the Governor, AG, and legislative leadership have committed to “continued robust stakeholder feedback” throughout the revision process, which should give industry additional opportunities to weigh in on Colorado’s AI regulatory framework before SB 205 takes effect.
California
California lawmakers continue to advance dozens of AI bills that address a range of issues, from deceptive election deepfakes to potential “hazardous capabilities” of the most powerful AI models.
Automated Decision Tools and Algorithmic Discrimination. On May 21, AB 2930 passed the California Assembly on a 50-14-16 vote and was ordered to the Senate. Similar to Colorado’s AI law, AB 2930 would impose impact assessment, notice, and disclosure requirements on developers and deployers of “automated decision tools” used to make “consequential decisions” for consumers. If passed, the bill would take effect on January 1, 2026, one month before the effective date of Colorado’s SB 205.
The Safe & Secure Innovation for Frontier AI Models Act. On May 21, the Safe & Secure Innovation for Frontier AI Models Act (SB 1047) passed the California Senate on a 32-1-7 vote and was ordered to the Assembly. The bill would impose safety testing and incident reporting requirements on AI models that are trained on a quantity of computing power that is greater than 10²⁶ flops and exceeds $100,000,000 in value. SB 1047 would also require developers of covered models to implement various safeguards, including “kill switches” and cybersecurity protections. We previously covered SB 1047 on our blog here.
Provenance, Authenticity, and Watermarking Standards. On May 22, AB 3211 passed the California Assembly on a 62-0-18 vote and was ordered to the Senate. The bill would require generative AI providers to ensure that synthetic content produced or significantly modified by their generative AI systems contains “imperceptible and maximally indelible” watermarks. The bill would also require that generative AI providers (1) conduct red-team testing to ensure that watermarks cannot be easily removed, (2) make publicly available “watermark decoders” that allow individuals to assess the provenance of AI-generated content, and (3) report material vulnerabilities or failures in generative AI systems related to the inclusion or removal of watermarks. AB 3211 would also require “large online platforms” to label content on their platforms as synthetic or nonsynthetic and to detect and label synthetic content that lacks a watermark. We previously summarized other states’ approaches to regulating synthetic content and generative AI here.
The Defending Democracy from Deepfake Deception Act of 2024. On May 22, the Defending Democracy from Deepfake Deception Act (AB 2655) passed the California Assembly on a 56-1-23 vote and was ordered to the Senate. The bill would require large online platforms to use state-of-the-art tools to detect materially deceptive content, including deepfakes and chatbots, on their platforms. Large online platforms would also be required to block and prevent the posting or sending of materially deceptive content depicting candidates between 120 days before an election and election day or, if the content depicts elections officials, between 120 days before and 60 days after an election. For materially deceptive content posted outside those time periods or appearing within advertisements or election communications, platforms would be required to detect and label such content as inauthentic, fake, or false.
Given the recent progress on several AI bills, California lawmakers appear to be coalescing around core pillars of a potential comprehensive AI regulatory regime for developers, deployers, and online platforms. Although it is not certain which bills, if any, will pass by the legislature’s scheduled adjournment on August 31, the breadth of pending AI legislation highlights potential key areas of focus for future legislative sessions: algorithmic discrimination, public safety, generative AI tools, and AI-generated election content online.
* * *
Follow our Global Policy Watch, Inside Global Tech, and Inside Privacy blogs for ongoing updates on key AI and other technology legislative and regulatory developments.