U.S. policymakers have continued to express interest in legislation to regulate artificial intelligence (“AI”), particularly at the state level. Although comprehensive AI bills and frameworks in Congress have received substantial attention, state legislatures also have been moving forward with their own efforts to regulate AI. This blog post summarizes key themes in state AI bills introduced in the past year. Now that new state legislative sessions have commenced, we expect to see even more activity in the months ahead.
- Notice Requirements: A number of state AI bills focus on notice to individuals. Some bills would require covered entities to notify individuals when automated decision-making tools are used for decisions that affect their rights and opportunities, such as the use of AI in employment. For example, the District of Columbia’s “Stop Discrimination by Algorithms Act” (B 114) would require a notice explaining how the covered entity uses personal information in algorithmic eligibility determinations, including the sources of that information, and a separate notice to any individual affected by an algorithmic eligibility determination that results in an “adverse action.” Likewise, the Massachusetts “Act Preventing a Dystopian Work Environment” (HB 1873) would require employers or vendors using an automated decision system to provide notice to workers before adopting the system, and an additional notice if “significant updates or changes” are made to the system. Other AI bills focus on disclosure requirements between entities in the AI ecosystem. For example, Washington’s legislature is considering a bill (HB 1951) that would require developers of automated decision tools to provide deployers with documentation of the tool’s “known limitations,” the types of data used to program or train the tool, and how the tool was evaluated for validity.
- Impact Assessments: Another key theme in state AI bills is the requirement to conduct impact assessments for AI tools; these assessments aim to mitigate potential discrimination, privacy, and accuracy harms. For example, a Vermont bill (HB 114) would require employers using automated decision-making tools to conduct algorithmic impact assessments before using those tools for employment-related decisions. Additionally, the Washington bill discussed above (HB 1951) would require deployers to complete impact assessments for automated decision tools that address, for example, the reasonably foreseeable risks of algorithmic decision-making and the safeguards implemented.
- Individual Rights: State AI bills also would grant individuals certain rights with respect to automated decision-making. For example, several state AI bills would establish an individual right to opt out of decisions based on automated decision-making or to request human reevaluation of such decisions. California (AB 331) and New York (AB 7859) are considering bills that would require AI deployers to allow individuals to request “alternative selection processes” where an automated decision tool is used to make, or is a controlling factor in, a consequential decision. Similarly, New York’s AI Bill of Rights (S 8209) would provide individuals with the right to opt out of the use of automated systems in favor of a human alternative.
- Licensing & Registration Regimes: A handful of state legislatures have proposed AI licensing and registration requirements. For example, New York’s Advanced AI Licensing Act (A 8195) would require all developers and operators of certain “high-risk advanced AI systems” to apply for a license from the state before use. Other bills would require registration for certain uses of an AI system. For instance, an amendment introduced in the Illinois legislature (HB 1002) would require state certification of diagnostic algorithms used by hospitals.
- Generative AI & Content Labeling: Another prominent theme in state AI legislation is the labeling of content produced by generative AI systems. For example, Rhode Island is considering a bill (H 6286) that would require a “distinctive watermark” to authenticate generative AI content.