Artificial Intelligence (AI)

Nearly a year after Senate Majority Leader Chuck Schumer (D-NY) launched the SAFE Innovation Framework for artificial intelligence (AI) with Senators Mike Rounds (R-SD), Martin Heinrich (D-NM), and Todd Young (R-IN), the bipartisan group has released a 31-page “Roadmap” for AI policy.  The overarching theme of the Roadmap is “harnessing the full potential of AI while minimizing the risks of AI in the near and long term.”

In contrast to Europe’s approach to regulating AI, the Roadmap does not propose or even contemplate a comprehensive AI law.  Rather, it identifies key themes and areas of agreement and directs the relevant congressional committees of jurisdiction to legislate on key issues.  The Roadmap recommendations are informed by the nine AI Insight Forums that the bipartisan group convened over the last year.

  • Supporting U.S. Innovation in AI.  The Roadmap recommends at least $32 billion in funding per year for non-defense AI innovation, and the authors call on the Appropriations Committee to “develop emergency appropriations language to fill the gap between current spending levels and the [National Security Commission on AI (NSCAI)]-recommended level,” suggesting the bipartisan group would like to see Congress increase funding for AI as soon as this year.  The funding would cover a host of purposes, such as AI R&D, including AI chip design and manufacture; funding the outstanding CHIPS and Science Act accounts that relate to AI; and AI testing and evaluation at NIST.
    • This pillar also endorses the bipartisan Creating Resources for Every American to Experiment with Artificial Intelligence (CREATE AI) Act (S. 2714), which would broaden nonprofit and academic researchers’ access to AI development resources including computing power, datasets, testbeds, and training through a new National Artificial Intelligence Research Resource.  The Roadmap also supports elements of the Future of AI Innovation Act (S. 4178) related to “grand challenge” funding programs, which aim to accelerate AI development through prize competitions and federal investment initiatives.
    • The bipartisan group also recommends that the emergency funding measure include funds for the Department of Defense and DARPA to address national security threats and opportunities.
  • AI and the Workforce.  The Roadmap recommends that committees of jurisdiction consider the impact of AI on U.S. workers and ensure that working Americans benefit from technological progress, including through training programs and by studying the impacts of AI on workers.  Importantly, the bipartisan group recommends legislation to “improve the U.S. immigration system for high-skilled STEM workers.”  The Roadmap does not address benefit programs for displaced workers.

Continue Reading Bipartisan Senate AI Roadmap Released

Although the final text of the EU AI Act should enter into force in the next few months, many of its obligations will only start to apply two or more years after that (for further details, see our earlier blog here). To address this gap, the Commission is encouraging industry to take early, voluntary

In the absence of congressional action on comprehensive artificial intelligence (AI) legislation, state legislatures are forging ahead with groundbreaking bills to regulate the rapidly advancing technology.  On May 8, the Colorado House of Representatives passed SB 205, a far-reaching and comprehensive AI bill, on a 41-22-2 vote.  The final vote comes just days

As the 2024 elections approach and the window for Congress to consider bipartisan comprehensive artificial intelligence (AI) legislation shrinks, California officials are attempting to guard against a generative AI free-for-all—at least with respect to state government use of the rapidly advancing technology—by becoming the largest state to issue rules for state procurement of AI technologies. 

This is part of a series of Covington blogs on implementation of Executive Order 14028, “Improving the Nation’s Cybersecurity,” issued by President Biden on May 12, 2021 (the “Cyber EO”).  The first blog summarized the Cyber EO’s key provisions and timelines, and the subsequent blogs  described the actions taken by various government agencies to implement

With the rapid evolution of artificial intelligence (AI) technology, regulatory frameworks for AI in the Asia–Pacific (APAC) region continue to develop quickly. Policymakers and regulators have been prompted either to review existing regulatory frameworks to ensure they effectively address the emerging risks posed by AI, or to propose new, AI-specific rules or regulations. Overall, there appears to be a trend across the region to promote AI use and development, with most jurisdictions focusing on high-level, principle-based guidance. While a few jurisdictions are considering AI-specific regulations, those efforts are still at an early stage. Further, privacy regulators and some industry regulators, such as financial regulators, are starting to play a role in AI governance.

This blog post provides an overview of various approaches in regulating AI and managing AI-related risks in the APAC region.  

  • AI-Specific Laws and Regulations

Several jurisdictions in the region are moving toward AI-specific regulations, including the People’s Republic of China (hereinafter referred to as China), South Korea, and Taiwan.

  • China has been the most active jurisdiction in shaping regulations specific to generative AI technologies since 2023. It has taken a multifaceted approach that combines AI-specific regulations, national standards, and technical guidance to govern generative AI services, with the regulatory focus on services provided to the public in China. The Interim Administrative Measures for Generative Artificial Intelligence Services represent a milestone as the first comprehensive regulation specifically addressing generative AI services (a summary of this regulation can be found in our previous post here). Several non-binding technical documents and national standards have been issued or are being drafted to further implement this regulation. Before adopting the generative AI regulation, China had issued regulations on deep synthesis and algorithmic recommendations. Further, China has promulgated rules on conducting ethical reviews of scientific activities involving generative AI.
  • Beyond a few provisions on narrow aspects scattered across other regimes, South Korea does not presently have a comprehensive AI-specific regulatory framework. Proposed in early 2023, the draft Act on Fostering the AI Industry and Securing Trustworthy AI remains pending before the National Assembly. If enacted, it would establish the first comprehensive legislative framework governing the use of AI in South Korea, generally reflecting an approach that would permit AI use and development subject to subsequent safeguards if and as needed. In parallel, the Personal Information Protection Commission (PIPC) has been advocating a flexible approach to AI based on PIPC-supported self-regulation. Furthermore, the Korean Fair Trade Commission (KFTC) will soon begin a detailed study to identify potential AI-induced risks to consumer protection as well as unfair or anti-competitive practices. That study could result in KFTC-supervised self-regulation of certain aspects of AI through industry codes of conduct supplemented by a set of AI guidelines, or even in proposed legislation or amendments to existing consumer protection or antitrust rules.
  • Similarly, Taiwan is drafting a basic law governing AI, the Basic Law for Development of Artificial Intelligence, which would set out fundamental principles for AI development and for the government’s promotion of AI technologies. However, it remains uncertain whether and when Taiwan will pass this draft law.
  • Non-binding AI Principles and Guidelines

Continue Reading Overview of AI Regulatory Landscape in APAC

Senate Commerce Committee Chair Maria Cantwell (D-WA) and Senators Todd Young (R-IN), John Hickenlooper (D-CO), and Marsha Blackburn (R-TN) recently introduced the Future of AI Innovation Act, a legislative package that addresses key bipartisan priorities to promote AI safety, standardization, and access.  The bill would also advance U.S. leadership in AI by facilitating R&D

On April 2, the California Senate Judiciary Committee held a hearing on the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047) and favorably reported the bill in a 9-0 vote (with 2 members not voting).  The vote marks a major step toward comprehensive artificial intelligence (AI) regulation in a

A new post on the Covington Inside Privacy blog discusses remarks by California Privacy Protection Agency (CPPA) Executive Director Ashkan Soltani at the International Association of Privacy Professionals’ global privacy conference last week.  The remarks covered the CPPA’s priorities for rulemaking and administrative enforcement of the California Consumer Privacy Act, including with respect to connected

This is the thirty-fourth in a series of Covington blogs on implementation of Executive Order 14028, “Improving the Nation’s Cybersecurity,” issued by President Biden on May 12, 2021 (the “Cyber EO”).  The first blog summarized the Cyber EO’s key provisions and timelines, and the subsequent blogs described the actions taken by various government agencies to implement the Cyber EO from June 2021 through January 2024.  This blog describes key actions taken to implement the Cyber EO, as well as the U.S. National Cybersecurity Strategy, during February 2024.  It also describes key actions taken during February 2024 to implement President Biden’s Executive Order on Artificial Intelligence (the “AI EO”), particularly its provisions that impact cybersecurity, secure software, and federal government contractors.

NIST Publishes Cybersecurity Framework 2.0

On February 26, 2024, the U.S. National Institute of Standards and Technology (“NIST”) published version 2.0 of its Cybersecurity Framework.  The NIST Cybersecurity Framework (“CSF” or “Framework”) provides a taxonomy of high-level cybersecurity outcomes that can be used by any organization, regardless of its size, sector, or relative maturity, to better understand, assess, prioritize, and communicate its cybersecurity efforts.  CSF 2.0 makes some significant changes to the Framework, particularly in the areas of Governance and Cybersecurity Supply Chain Risk Management (“C-SCRM”).  Covington’s Privacy and Cybersecurity group has posted a blog that discusses CSF 2.0 and those changes in greater detail.

NTIA Requests Comment Regarding “Open Weight” Dual-Use Foundation AI Models

Also on February 26, the National Telecommunications and Information Administration (“NTIA”) published a request for comments on the risks, benefits, and possible regulation of “dual-use foundation models for which the model weights are widely available.”  Among other questions raised by NTIA in the document is whether the availability of public model weights could pose risks to infrastructure or the defense sector.  NTIA is seeking comments in order to prepare a report, required by the AI EO by July 26, 2024, on the risks and benefits of private companies making the weights of their foundation AI models publicly available.  NTIA’s request for comments notes that “openness” or “wide availability” are terms without clear definition, and that “more information [is] needed to detail the relationship between openness and the wide availability of both model weights and open foundation models more generally.”  NTIA also requests comments on potential regulatory regimes for dual-use foundation models with widely available model weights, as well as the kinds of regulatory structures “that could deal with not only the large scale of these foundation models, but also the declining level of computing resources needed to fine-tune and retrain them.”

Continue Reading February 2024 Developments Under President Biden’s Cybersecurity Executive Order, National Cybersecurity Strategy, and AI Executive Order