On February 9, the Third Appellate District of California vacated a trial court’s decision that held that enforcement of the California Privacy Protection Agency’s (“CPPA”) regulations could not commence until one year after the date on which the regulations were finalized. As we previously explained, the Superior Court’s order prevented the CPPA from enforcing the regulations…
U.S. policymakers have continued to express interest in legislation to regulate artificial intelligence (“AI”), particularly at the state level. Although comprehensive AI bills and frameworks in Congress have received substantial attention, state legislatures also have been moving forward with their own efforts to regulate AI. This blog post summarizes key themes in state AI bills…
This quarterly update highlights key legislative, regulatory, and litigation developments in the fourth quarter of 2023 and early January 2024 related to technology issues. These included developments related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), data privacy, and cybersecurity. As noted below, some of these developments provide companies with the opportunity for participation and comment.
I. Artificial Intelligence
Federal Executive Developments on AI
The Executive Branch and U.S. federal agencies had an active quarter, which included the White House’s October 2023 release of the Executive Order (“EO”) on Safe, Secure, and Trustworthy Artificial Intelligence. The EO directs a host of new actions by federal agencies designed to set standards for AI safety and security; protect Americans’ privacy; advance equity and civil rights; protect vulnerable groups such as consumers, patients, and students; support workers; promote innovation and competition; advance American leadership abroad; and effectively regulate the use of AI in government. The EO builds on the White House’s prior work surrounding the development of responsible AI. Concerning privacy, the EO sets forth a number of requirements for the use of personal data for AI systems, including the prioritization of federal support for privacy-preserving techniques and strengthening privacy-preserving research and technologies (e.g., cryptographic tools). Regarding equity and civil rights, the EO calls for clear guidance to landlords, Federal benefits programs, and Federal contractors to keep AI systems from being used to exacerbate discrimination. The EO also sets out requirements for developers of AI systems, including requiring companies developing any foundation model “that poses a serious risk to national security, national economic security, or national public health and safety” to notify the federal government when training the model and provide results of all red-team safety tests to the government.
Federal Legislative Activity on AI
Congress continued to evaluate AI legislation and proposed a number of AI bills, though none of these bills are expected to progress in the immediate future. For example, members of Congress continued to hold meetings on AI and introduced bills related to deepfakes, AI research, and transparency for foundation models.
- Deepfakes and Inauthentic Content: In October 2023, a group of bipartisan senators released a discussion draft of the NO FAKES Act, which would prohibit persons or companies from producing an unauthorized digital replica of an individual in a performance or hosting unauthorized digital replicas if the platform has knowledge that the replica was not authorized by the individual depicted.
- Research: In November 2023, Senator Thune (R-SD), along with five bipartisan co-sponsors, introduced the Artificial Intelligence Research, Innovation, and Accountability Act (S. 3312), which would require covered internet platforms that operate generative AI systems to provide their users with clear and conspicuous notice that the covered internet platform uses generative AI.
- Transparency for Foundation Models: In December 2023, Representative Beyer (D-VA-8) introduced the AI Foundation Model Transparency Act (H.R. 6881), which would direct the Federal Trade Commission (“FTC”) to establish transparency standards for foundation model deployers in consultation with other agencies. The standards would require companies to provide consumers and the FTC with information on a model’s training data and mechanisms, as well as information regarding whether user data is collected in inference.
- Bipartisan Senate Forums: Senator Schumer’s (D-NY) AI Insight Forums, which are a part of his SAFE Innovation Framework, continued to take place this quarter. As part of these forums, bipartisan groups of senators met multiple times to learn more about key issues in AI policy, including privacy and liability, long-term risks of AI, and national security.
The Federal Trade Commission’s (“FTC”) Office of Technology announced that it will hold a half-day virtual “FTC Tech Summit” on January 25, 2024 to address key developments in the field of artificial intelligence (“AI”).
The FTC’s event website notes that the Summit will “bring together a diverse set of perspectives across academia, industry, civil society…
Ahead of its December 8 board meeting, the California Privacy Protection Agency (CPPA) has issued draft risk assessment regulations. The CPPA has yet to initiate the formal rulemaking process and has stated that it expects to begin formal rulemaking next year, at which time it will also consider draft regulations covering “automated decisionmaking technology” (ADMT), cybersecurity audits, and revisions to existing regulations. Accordingly, the draft risk assessment regulations are subject to change. Below are the key takeaways:
When a Risk Assessment is Required: The draft regulations would require businesses to conduct a risk assessment before processing consumers’ personal information in a manner that “presents significant risk to consumers’ privacy.” The draft regulations identify several activities that would present such risk:
- Selling or sharing personal information;
- Processing sensitive personal information (except in certain situations involving employees and independent contractors);
- Using ADMT (1) for a decision that produces legal or similarly significant effects concerning a consumer, (2) to profile a consumer who is acting in their capacity as an employee, independent contractor, job applicant, or student, (3) to profile a consumer while they are in a public place, or (4) for profiling for behavioral advertising; or
- Processing a consumer’s personal information if the business has actual knowledge the consumer is under 16.
The field of artificial intelligence (“AI”) is at a tipping point. Governments and industries are under increasing pressure to forecast and guide the evolution of a technology that promises to transform our economies and societies. In this series, our lawyers and advisors provide an overview of the policy approaches and regulatory frameworks for AI in jurisdictions around the world. Given the rapid pace of technological and policy developments in this area, the articles in this series should be viewed as snapshots in time, reflecting the current policy environment and priorities in each jurisdiction.
The following article examines the state of play in AI policy and regulation in the United States. The previous article in this series covered the European Union.
Future of AI Policy in the U.S.
U.S. policymakers are focused on artificial intelligence (AI) platforms as they explode into the mainstream. AI has emerged as an active policy space across Congress and the Biden Administration, as officials scramble to educate themselves on the technology while crafting legislation, rules, and other measures to balance U.S. innovation leadership with national security priorities.
Over the past year, AI issues have drawn bipartisan interest and support. House and Senate committees have held nearly three dozen hearings on AI this year alone, and more than 30 AI-focused bills have been introduced so far this Congress. Two bipartisan groups of Senators have announced separate frameworks for comprehensive AI legislation. Several AI bills—largely focused on the federal government’s internal use of AI—have also been voted on and passed through committees.
Meanwhile, the Biden Administration has announced plans to issue a comprehensive executive order this fall to address a range of AI risks under existing law. The Administration has also taken steps to promote the responsible development and deployment of AI systems, including securing voluntary commitments regarding AI safety and transparency from 15 technology companies.
Despite strong bipartisan interest in AI regulation, commitment from leaders of major technology companies engaged in AI R&D, and broad support from the general public, passing comprehensive AI legislation remains a challenge. No consensus has emerged around either substance or process, with different groups of Members, particularly in the Senate, developing their own versions of AI legislation through different procedures. In the House, a bipartisan bill would punt the issue of comprehensive regulation to the executive branch, creating a blue-ribbon commission to study the issue and make recommendations.
I. Major Policy & Regulatory Initiatives
Three versions of a comprehensive AI regulatory regime have emerged in Congress – two in the Senate and one in the House. We preview these proposals below.
A. SAFE Innovation: Values-Based Framework and New Legislative Process
In June, Senate Majority Leader Chuck Schumer (D-NY) unveiled a new bipartisan proposal—with Senators Martin Heinrich (D-NM), Todd Young (R-IN), and Mike Rounds (R-SD)—to develop legislation to promote and regulate artificial intelligence. Leader Schumer proposed a plan to boost U.S. global competitiveness in AI development, while ensuring appropriate protections for consumers and workers.
This quarterly update summarizes key legislative and regulatory developments in the second quarter of 2023 related to key technologies and related topics, including Artificial Intelligence (“AI”), the Internet of Things (“IoT”), connected and automated vehicles (“CAVs”), data privacy and cybersecurity, and online teen safety.
AI continued to be an area of significant interest to both lawmakers and regulators throughout the second quarter of 2023. Members of Congress continued to grapple with ways to address risks posed by AI and have held hearings, made public statements, and introduced legislation to regulate AI. Notably, Senator Chuck Schumer (D-NY) revealed his “SAFE Innovation framework” for AI legislation. The framework reflects five principles for AI – security, accountability, foundations, explainability, and innovation – and is summarized here. There were also a number of AI legislative proposals introduced this quarter. Some proposals, like the National AI Commission Act (H.R. 4223) and Digital Platform Commission Act (S. 1671), propose the creation of an agency or commission to review and regulate AI tools and systems. Other proposals focus on mandating disclosures of AI systems. For example, the AI Disclosure Act of 2023 (H.R. 3831) would require generative AI systems to include a specific disclaimer on any outputs generated, and the REAL Political Advertisements Act (S. 1596) would require political advertisements to include a statement within the contents of the advertisement if generative AI was used to generate any image or video footage. Additionally, Congress convened hearings to explore AI regulation this quarter, including a Senate Judiciary Committee hearing in May titled “Oversight of A.I.: Rules for Artificial Intelligence.”
There also were several federal Executive Branch and regulatory developments focused on AI in the second quarter of 2023, including, for example:
- White House: The White House issued a number of updates on AI this quarter, including the Office of Science and Technology Policy’s strategic plan focused on federal AI research and development, discussed in greater detail here. The White House also requested comments on the use of automated tools in the workplace, including a request for feedback on tools to surveil, monitor, evaluate, and manage workers, described here.
- CFPB: The Consumer Financial Protection Bureau (“CFPB”) issued a spotlight on the adoption and use of chatbots by financial institutions.
- FTC: The Federal Trade Commission (“FTC”) continued to issue guidance on AI, such as guidance expressing the FTC’s view that dark patterns extend to AI, that generative AI poses competition concerns, and that tools claiming to spot AI-generated content must make accurate disclosures of their abilities and limitations.
- HHS Office of National Coordinator for Health IT: This quarter, the Department of Health and Human Services (“HHS”) released a proposed rule related to certified health IT that enables or interfaces with “predictive decision support interventions” (“DSIs”) that incorporate AI and machine learning technologies. The proposed rule would require the disclosure of certain information about predictive DSIs to enable users to evaluate DSI quality and whether and how to rely on the DSI recommendations, including a description of the development and validation of the DSI. Developers of certified health IT would also be required to implement risk management practices for predictive DSIs and make summary information about these practices publicly available.
On June 22, 2023, the Oregon state legislature passed the Oregon Consumer Privacy Act, S.B. 619 (the “Act”). This bill resembles the comprehensive privacy statutes in Colorado, Montana, and Connecticut, though there are some notable distinctions. If signed into law, Oregon would become the twelfth state to implement a comprehensive privacy statute, joining California, Virginia, Colorado, Connecticut…
On April 25, 2023, four federal agencies — the Department of Justice (“DOJ”), Federal Trade Commission (“FTC”), Consumer Financial Protection Bureau (“CFPB”), and Equal Employment Opportunity Commission (“EEOC”) — released a joint statement on the agencies’ efforts to address discrimination and bias in automated systems.
The statement applies to “automated systems,” which are broadly defined “to mean software and algorithmic processes” beyond AI. Although the statement notes the significant benefits that can flow from the use of automated systems, it also cautions against unlawful discrimination that may result from that use.
The statement starts by summarizing the existing legal authorities that apply to automated systems and each agency’s guidance and statements related to AI. Helpfully, the statement serves to aggregate links to key AI-related guidance documents from each agency, providing a one-stop shop for important AI-related publications for all four entities. For example, the statement summarizes the EEOC’s remit in enforcing federal laws that make it unlawful to discriminate against an applicant or employee and the EEOC’s enforcement activities related to AI, and includes a link to a technical assistance document. Similarly, the statement outlines the FTC’s reports and guidance on AI, and includes multiple links to FTC AI-related documents.
After providing an overview of each agency’s position and links to key documents, the statement then summarizes the following sources of potential discrimination and bias, which could indicate the regulatory and enforcement priorities of these agencies.
- Data and Datasets: The statement notes that outcomes generated by automated systems can be skewed by unrepresentative or imbalanced data sets. The statement says that flawed data sets, along with correlation between data and protected classes, can lead to discriminatory outcomes.
- Model Opacity and Access: The statement observes that some automated systems are “black boxes,” meaning that the internal workings of automated systems are not always transparent to people, and thus difficult to oversee.
- Design and Use: The statement also notes that flawed assumptions about users may play a role in unfair or biased outcomes.
We will continue to monitor these and related developments across our blogs.
On March 28, Governor Kim Reynolds signed into law SF 262, making Iowa the sixth state to enact a comprehensive consumer privacy law. The new law will take effect on January 1, 2025.
As we discuss here, Iowa’s privacy law shares a number of key similarities with existing state privacy frameworks, including providing…