Artificial Intelligence (AI)

Ahead of its December 8 board meeting, the California Privacy Protection Agency (CPPA) has issued draft risk assessment regulations.  The CPPA has yet to initiate the formal rulemaking process and has stated that it expects to begin formal rulemaking next year, at which time it will also consider draft regulations covering “automated decisionmaking technology” (ADMT), cybersecurity audits, and revisions to existing regulations.  Accordingly, the draft risk assessment regulations are subject to change.  Below are the key takeaways:

When a Risk Assessment is Required: The draft regulations would require businesses to conduct a risk assessment before processing consumers’ personal information in a manner that “presents significant risk to consumers’ privacy.”  The draft regulations identify several activities that would present such risk:

  • Selling or sharing personal information;
  • Processing sensitive personal information (except in certain situations involving employees and independent contractors);
  • Using ADMT (1) for a decision that produces legal or similarly significant effects concerning a consumer, (2) to profile a consumer who is acting in their capacity as an employee, independent contractor, job applicant, or student, (3) to profile a consumer while they are in a public place, or (4) for profiling for behavioral advertising; or
  • Processing a consumer’s personal information if the business has actual knowledge the consumer is under 16.
Continue Reading CPPA Releases Draft Risk Assessment Regulations

Recently, a bipartisan group of U.S. senators introduced new legislation to address transparency and accountability for artificial intelligence (AI) systems, including those deployed for certain “critical impact” use cases. While many other targeted, bipartisan AI bills have been introduced in both chambers of Congress, this bill appears to be one of the first to propose

On October 12, 2023, the Italian Data Protection Authority (“Garante”) published guidance on the use of AI in healthcare services (the “Guidance”).  The document builds on principles enshrined in the GDPR and in national and EU case law.  Although the Guidance focuses on Italy’s national healthcare services, it offers considerations relevant to the use of AI in the healthcare

Yesterday, the White House issued a Fact Sheet summarizing its Executive Order on a comprehensive strategy to support the development of safe and secure artificial intelligence (“AI”).  The Executive Order follows a number of actions by the Biden Administration on AI, including its Blueprint for an AI Bill of Rights and voluntary commitments from certain developers of AI systems.  According to the Administration, the Executive Order establishes new AI safety and security standards, protects privacy, advances equity and civil rights, protects workers, consumers, and patients, promotes innovation and competition, and advances American leadership.  This blog post summarizes these key components.

  • Safety & Security Standards:  The Executive Order sets out several required actions for developers of AI systems.  Notably, the White House, “in accordance with the Defense Production Act,” will require companies developing any foundation model “that poses a serious risk to national security, national economic security, or national public health and safety” to notify the federal government when training the model and provide results of all red-team safety tests to the government. 

Relatedly, the Executive Order directs certain federal agencies to undertake the following actions and initiatives:

  • National Institute of Standards and Technology:  establish standards for red-teaming required before the public release of an AI system. 
  • Department of Homeland Security:  apply the NIST standards to use of AI in critical infrastructure sectors and establish an AI Safety and Security Board. 
  • Departments of Energy and Homeland Security:  address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks; the Executive Order also calls for the creation of standards for biological synthesis screening.
  • Department of Commerce:  develop guidance for content authentication and watermarking to label content generated by AI and received by the government; the Executive Order also suggests that federal agencies would be required to use these tools.
  • National Security Council & White House Chief of Staff:  develop a National Security Memorandum that ensures that the United States military and intelligence community use AI safely, ethically, and effectively.
Continue Reading Biden Administration Announces Artificial Intelligence Executive Order

The field of artificial intelligence (“AI”) is at a tipping point. Governments and industries are under increasing pressure to forecast and guide the evolution of a technology that promises to transform our economies and societies. In this series, our lawyers and advisors provide an overview of the policy approaches and regulatory frameworks for AI in jurisdictions around the world. Given the rapid pace of technological and policy developments in this area, the articles in this series should be viewed as snapshots in time, reflecting the current policy environment and priorities in each jurisdiction.

The following article examines the state of play in AI policy and regulation in the United States. The previous article in this series covered the European Union.

Future of AI Policy in the U.S.

U.S. policymakers are focused on artificial intelligence (AI) platforms as they explode into the mainstream.  AI has emerged as an active policy space across Congress and the Biden Administration, as officials scramble to educate themselves on the technology while crafting legislation, rules, and other measures to balance U.S. innovation leadership with national security priorities.

Over the past year, AI issues have drawn bipartisan interest and support.  House and Senate committees have held nearly three dozen hearings on AI this year alone, and more than 30 AI-focused bills have been introduced so far this Congress.  Two bipartisan groups of Senators have announced separate frameworks for comprehensive AI legislation.  Several AI bills—largely focused on the federal government’s internal use of AI—have also been voted on and passed through committees. 

Meanwhile, the Biden Administration has announced plans to issue a comprehensive executive order this fall to address a range of AI risks under existing law.  The Administration has also taken steps to promote the responsible development and deployment of AI systems, including securing voluntary commitments regarding AI safety and transparency from 15 technology companies. 

Despite strong bipartisan interest in AI regulation, commitment from leaders of major technology companies engaged in AI R&D, and broad support from the general public, passing comprehensive AI legislation remains a challenge.  No consensus has emerged around either substance or process, with different groups of Members, particularly in the Senate, developing their own versions of AI legislation through different procedures.  In the House, a bipartisan bill would punt the issue of comprehensive regulation to the executive branch, creating a blue-ribbon commission to study the issue and make recommendations.

I. Major Policy & Regulatory Initiatives

Three versions of a comprehensive AI regulatory regime have emerged in Congress – two in the Senate and one in the House.  We preview these proposals below.

A. SAFE Innovation: Values-Based Framework and New Legislative Process

In June, Senate Majority Leader Chuck Schumer (D-NY) unveiled a new bipartisan proposal—with Senators Martin Heinrich (D-NM), Todd Young (R-IN), and Mike Rounds (R-SD)—to develop legislation to promote and regulate artificial intelligence.  Leader Schumer proposed a plan to boost U.S. global competitiveness in AI development, while ensuring appropriate protections for consumers and workers.

Continue Reading Spotlight Series on Global AI Policy — Part II: U.S. Legislative and Regulatory Developments

On August 22, 2023, the Spanish Council of Ministers approved the Statute of the Spanish Agency for the Supervision of Artificial Intelligence (“AESIA”), thus creating the first AI regulatory body in the EU.  AESIA will begin operating in December 2023, in anticipation of the upcoming EU AI Act (for a summary of the AI

The field of artificial intelligence (“AI”) is at a tipping point. Governments and industries are under increasing pressure to forecast and guide the evolution of a technology that promises to transform our economies and societies. In this series, our lawyers and advisors provide an overview of the policy approaches and regulatory frameworks for AI in

On September 6, Senator Bill Cassidy (R-LA), the Ranking Member of the U.S. Senate Health, Education, Labor and Pensions (HELP) Committee, issued a white paper about the oversight and legislative role of Congress related to the deployment of artificial intelligence (AI) in areas under the HELP Committee’s jurisdiction, including health and life sciences.  In the white paper, Senator Cassidy disfavors a one-size-fits-all approach to the regulation of AI and instead calls for a flexible approach that leverages existing frameworks depending on the particular context of use of AI.  “[O]nly if our current frameworks are unable to accommodate . . . AI, should Congress look to create new ones or modernize existing ones.”  The Senator seeks public feedback on the white paper by September 22, 2023.  Health care and life sciences stakeholders should consider providing comments. 

This blog outlines five key takeaways from the white paper from a health care and life sciences perspective. Note that beyond health and life sciences issues, the white paper also addresses considerations for other areas, such as use of AI in educational settings and labor/employment implications created by use of AI.


5 Key Takeaways for AI in Health Care and Life Sciences

The white paper – entitled “Exploring Congress’ Framework for the Future of AI: The Oversight and Legislative Role of Congress Over the Integration of Artificial Intelligence in Health, Education, and Labor” – describes the “enormous good” that AI in health care presents, such as “the potential to help create new cures, improve care, and reduce administrative burdens and overall health care spending.”  At the same time, Senator Cassidy notes that AI presents risks that legal frameworks should seek to minimize.  Five key takeaways from the white paper include:

Continue Reading Framework for the Future of AI: Senator Cassidy Issues White Paper, Seeks Public Feedback

On September 6, 2023, U.S. Senator Bill Cassidy, ranking member of the Senate Health, Education, Labor and Pensions (HELP) Committee, published a white paper addressing artificial intelligence (AI) and its potential benefits and risks in the workplace, as well as in the health care context, which we discuss here.

The white paper notes that employers

The Federal Election Commission (FEC) has officially entered the ongoing national debate around artificial intelligence (AI) regulation, publishing a Federal Register notice seeking comment on a petition submitted by Public Citizen to initiate a rulemaking to clarify that the Federal Election Campaign Act (FECA) prohibits deceptive AI-generated campaign advertisements.  The Commission unanimously approved