AI

This quarterly update highlights key legislative, regulatory, and litigation developments in the second quarter of 2024 related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), and data privacy and cybersecurity. 

I. Artificial Intelligence

Federal Legislative Developments

  • Impact Assessments: The American Privacy Rights Act of 2024 (H.R. 8818, hereinafter “APRA”) was formally introduced in the House by Representative Cathy McMorris Rodgers (R-WA) on June 25, 2024.  Notably, while previous drafts of the APRA, including the May 21 revised draft, would have required algorithm impact assessments, the introduced version no longer has the “Civil Rights and Algorithms” section that contained these requirements.
  • Disclosures: In April, Representative Adam Schiff (D-CA) introduced the Generative AI Copyright Disclosure Act of 2024 (H.R. 7913).  The Act would require persons that create a training dataset that is used to build a generative AI system to provide notice to the Register of Copyrights containing a “sufficiently detailed summary” of any copyrighted works used in the training dataset and the URL for such training dataset, if the dataset is publicly available.  The Act would require the Register to issue regulations to implement the notice requirements and to maintain a publicly available online database that contains each notice filed.
  • Public Awareness and Toolkits: Certain legislative proposals focused on increasing public awareness of AI and its benefits and risks.  For example, Senator Todd Young (R-IN) introduced the Artificial Intelligence Public Awareness and Education Campaign Act (S. 4596), which would require the Secretary of Commerce, in coordination with other agencies, to carry out a public awareness campaign that provides information regarding the benefits and risks of AI in the daily lives of individuals.  Senator Edward Markey (D-MA) introduced the Social Media and AI Resiliency Toolkits in Schools Act (S. 4614), which would require the Department of Education and the federal Department of Health and Human Services to develop toolkits to inform students, educators, parents, and others on how AI and social media may impact student mental health.
  • Senate AI Working Group Releases AI Roadmap: On May 15, the Bipartisan Senate AI Working Group published a roadmap for AI policy in the United States (the “AI Roadmap”).  The AI Roadmap encourages committees to conduct further research on specific issues relating to AI, such as “AI and the Workforce” and “High Impact Uses for AI.”  It states that existing laws (concerning, e.g., consumer protection, civil rights) “need to consistently and effectively apply to AI systems and their developers, deployers, and users” and raises concerns about AI “black boxes.”  The AI Roadmap also addresses the need for best practices and the importance of having a human in the loop for certain high impact automated tasks.

Continue Reading U.S. Tech Legislative, Regulatory & Litigation Update – Second Quarter 2024

Ahead of its December 8 board meeting, the California Privacy Protection Agency (CPPA) has issued draft risk assessment regulations.  The CPPA has yet to initiate the formal rulemaking process and has stated that it expects to begin formal rulemaking next year, at which time it will also consider draft regulations covering “automated decisionmaking technology” (ADMT), cybersecurity audits, and revisions to existing regulations.  Accordingly, the draft risk assessment regulations are subject to change.  Below are the key takeaways:

When a Risk Assessment is Required: The draft regulations would require businesses to conduct a risk assessment before processing consumers’ personal information in a manner that “presents significant risk to consumers’ privacy.”  The draft regulations identify several activities that would present such risk:

  • Selling or sharing personal information;
  • Processing sensitive personal information (except in certain situations involving employees and independent contractors);
  • Using ADMT (1) for a decision that produces legal or similarly significant effects concerning a consumer, (2) to profile a consumer who is acting in their capacity as an employee, independent contractor, job applicant, or student, (3) to profile a consumer while they are in a public place, or (4) for profiling for behavioral advertising; or
  • Processing a consumer’s personal information if the business has actual knowledge the consumer is under 16.

Continue Reading CPPA Releases Draft Risk Assessment Regulations

This quarterly update summarizes key legislative and regulatory developments in the second quarter of 2023 related to key technologies and related topics, including Artificial Intelligence (“AI”), the Internet of Things (“IoT”), connected and automated vehicles (“CAVs”), data privacy and cybersecurity, and online teen safety.

Artificial Intelligence

AI continued to be an area of significant interest to both lawmakers and regulators throughout the second quarter of 2023.  Members of Congress continued to grapple with ways to address risks posed by AI, holding hearings, making public statements, and introducing legislation to regulate AI.  Notably, Senator Chuck Schumer (D-NY) revealed his “SAFE Innovation framework” for AI legislation.  The framework reflects five principles for AI – security, accountability, foundations, explainability, and innovation – and is summarized here.  There were also a number of AI legislative proposals introduced this quarter.  Some proposals, like the National AI Commission Act (H.R. 4223) and the Digital Platform Commission Act (S. 1671), propose the creation of an agency or commission to review and regulate AI tools and systems.  Other proposals focus on mandating disclosures for AI systems.  For example, the AI Disclosure Act of 2023 (H.R. 3831) would require generative AI systems to include a specific disclaimer on any outputs generated, and the REAL Political Advertisements Act (S. 1596) would require political advertisements to include a statement within the contents of the advertisement if generative AI was used to generate any image or video footage.  Additionally, Congress convened hearings to explore AI regulation this quarter, including a Senate Judiciary Committee hearing in May titled “Oversight of A.I.: Rules for Artificial Intelligence.”

There also were several federal Executive Branch and regulatory developments focused on AI in the second quarter of 2023, including, for example:

  • White House:  The White House issued a number of updates on AI this quarter, including the Office of Science and Technology Policy’s strategic plan focused on federal AI research and development, discussed in greater detail here.  The White House also requested comments on the use of automated tools in the workplace, including a request for feedback on tools to surveil, monitor, evaluate, and manage workers, described here.
  • CFPB:  The Consumer Financial Protection Bureau (“CFPB”) issued a spotlight on the adoption and use of chatbots by financial institutions.
  • FTC:  The Federal Trade Commission (“FTC”) continued to issue guidance on AI, such as guidance expressing the FTC’s view that dark patterns extend to AI, that generative AI poses competition concerns, and that tools claiming to spot AI-generated content must make accurate disclosures of their abilities and limitations.
  • HHS Office of National Coordinator for Health IT:  This quarter, the Department of Health and Human Services (“HHS”) released a proposed rule related to certified health IT that enables or interfaces with “predictive decision support interventions” (“DSIs”) that incorporate AI and machine learning technologies.  The proposed rule would require the disclosure of certain information about predictive DSIs to enable users to evaluate DSI quality and whether and how to rely on the DSI recommendations, including a description of the development and validation of the DSI.  Developers of certified health IT would also be required to implement risk management practices for predictive DSIs and make summary information about these practices publicly available.

Continue Reading U.S. Tech Legislative & Regulatory Update – Second Quarter 2023

On April 25, 2023, four federal agencies — the Department of Justice (“DOJ”), Federal Trade Commission (“FTC”), Consumer Financial Protection Bureau (“CFPB”), and Equal Employment Opportunity Commission (“EEOC”) — released a joint statement on the agencies’ efforts to address discrimination and bias in automated systems. 

The statement applies to “automated systems,” which are broadly defined “to mean software and algorithmic processes” beyond AI.  Although the statement notes the significant benefits that can flow from the use of automated systems, it also cautions against unlawful discrimination that may result from that use. 

The statement starts by summarizing the existing legal authorities that apply to automated systems and each agency’s guidance and statements related to AI.  Helpfully, the statement serves to aggregate links to key AI-related guidance documents from each agency, providing a one-stop-shop for important AI-related publications for all four entities.  For example, the statement summarizes the EEOC’s remit in enforcing federal laws that make it unlawful to discriminate against an applicant or employee and the EEOC’s enforcement activities related to AI, and includes a link to a technical assistance document.  Similarly, the report outlines the FTC’s reports and guidance on AI, and includes multiple links to FTC AI-related documents.

After providing an overview of each agency’s position and links to key documents, the statement then summarizes the following sources of potential discrimination and bias, which could indicate the regulatory and enforcement priorities of these agencies.

  • Data and Datasets:  The statement notes that outcomes generated by automated systems can be skewed by unrepresentative or imbalanced data sets.  The statement says that flawed data sets, along with correlation between data and protected classes, can lead to discriminatory outcomes.
  • Model Opacity and Access:  The statement observes that some automated systems are “black boxes,” meaning that the internal workings of automated systems are not always transparent to people, and thus difficult to oversee.
  • Design and Use:  The statement also notes that flawed assumptions about users may play a role in unfair or biased outcomes.

We will continue to monitor these and related developments across our blogs.

Continue Reading DOJ, FTC, CFPB, and EEOC Statement on Discrimination and AI

In August 2022, the CHIPS and Science Act—a massive, $280 billion bill to boost public and private sector investments in critical and emerging technologies—became law.  We followed the bill from the beginning and anticipated significant opportunities for industry to inform and influence the direction of the new law’s programs. 

One such opportunity is available now.  The U.S. Department of Commerce recently published a request for information (RFI) “to inform the planning and design of the Regional Technology and Innovation Hub (Tech Hubs) program.”  The public comment period ends March 16, 2023.

Background

The CHIPS and Science Act authorized $10 billion for the U.S. Department of Commerce to establish a Regional Technology and Innovation Hub (Tech Hubs) program.  Specifically, Commerce was charged with designating at least 20 Tech Hubs and awarding grants to consortia composed of one or more institutions of higher education, political subdivisions, state governments, and “industry or firms in relevant technology, innovation, or manufacturing sectors” to develop and deploy critical technologies in those hubs.  $500 million has already been made available for the program, and Commerce will administer the program through the Economic Development Administration (EDA).

Continue Reading Commerce Seeks Comments on Regional Tech Hubs Program

On 24 January 2023, the Italian Supervisory Authority (“Garante”) announced that it fined three hospitals 55,000 EUR each for their unlawful use of an artificial intelligence (“AI”) system for risk stratification purposes, i.e., to systematically categorize patients based on their health status. The Garante also ordered the hospitals

Continue Reading Italian Garante Fines Three Hospitals Over Their Use of AI for Risk Stratification Purposes, Establishes That Predictive Medicine Processing Requires the Patient’s Explicit Consent

This quarterly update summarizes key legislative and regulatory developments in the fourth quarter of 2022 related to Artificial Intelligence (“AI”), the Internet of Things (“IoT”), connected and autonomous vehicles (“CAVs”), and data privacy and cybersecurity.

Artificial Intelligence

In the last quarter of 2022, the annual National Defense Authorization Act (“NDAA”), which contained AI-related provisions, was enacted into law.  The NDAA creates a pilot program to demonstrate use cases for AI in government.  Specifically, the Director of the Office of Management and Budget (“Director of OMB”) must identify four new use cases for the application of AI-enabled systems to support modernization initiatives that require “linking multiple siloed internal and external data sources.”  The pilot program is also meant to enable agencies to demonstrate the circumstances under which AI can be used to modernize agency operations and “leverage commercially available artificial intelligence technologies that (i) operate in secure cloud environments that can deploy rapidly without the need to replace operating systems; and (ii) do not require extensive staff or training to build.”  Finally, the pilot program prioritizes use cases where AI can drive “agency productivity in predictive supply chain and logistics,” such as predictive food demand and optimized supply, predictive medical supplies and equipment demand, and predictive logistics for disaster recovery, preparedness, and response.

At the state level, in late 2022, there were also efforts to advance requirements for AI used to make certain types of decisions under comprehensive privacy frameworks.  The Colorado Privacy Act draft rules were updated to clarify the circumstances that require controllers to provide an opt-out right for the use of automated decision-making and requirements for assessments of profiling decisions.  In California, although the California Consumer Privacy Act draft regulations do not yet cover automated decision-making, the California Privacy Protection Agency rules subcommittee provided a sample list of related questions concerning this topic during its December 16, 2022 board meeting.

Continue Reading U.S. AI, IoT, CAV, and Privacy Legislative Update – Fourth Quarter 2022