
This is the thirty-fourth in a series of Covington blogs on implementation of Executive Order 14028, “Improving the Nation’s Cybersecurity,” issued by President Biden on May 12, 2021 (the “Cyber EO”).  The first blog summarized the Cyber EO’s key provisions and timelines, and the subsequent blogs described the actions taken by various government agencies to implement the Cyber EO from June 2021 through January 2024.  This blog describes key actions taken to implement the Cyber EO, as well as the U.S. National Cybersecurity Strategy, during February 2024.  It also describes key actions taken during February 2024 to implement President Biden’s Executive Order on Artificial Intelligence (the “AI EO”), particularly its provisions that impact cybersecurity, secure software, and federal government contractors.

NIST Publishes Cybersecurity Framework 2.0

            On February 26, 2024, the U.S. National Institute of Standards and Technology (“NIST”) published version 2.0 of its Cybersecurity Framework.  The NIST Cybersecurity Framework (“CSF” or “Framework”) provides a taxonomy of high-level cybersecurity outcomes that can be used by any organization, regardless of its size, sector, or relative maturity, to better understand, assess, prioritize, and communicate its cybersecurity efforts.  CSF 2.0 makes some significant changes to the Framework, particularly in the areas of Governance and Cybersecurity Supply Chain Risk Management (“C-SCRM”).  Covington’s Privacy and Cybersecurity group has posted a blog that discusses CSF 2.0 and those changes in greater detail.

NTIA Requests Comment Regarding “Open Weight” Dual-Use Foundation AI Models

            Also on February 26, the National Telecommunications and Information Administration (“NTIA”) published a request for comments on the risks, benefits, and possible regulation of “dual-use foundation models for which the model weights are widely available.”  Among the questions raised by NTIA in the document is whether the availability of public model weights could pose risks to infrastructure or the defense sector.  NTIA is seeking comments in order to prepare a report that the AI EO requires by July 26, 2024 on the risks and benefits of private companies making the weights of their foundation AI models publicly available.  NTIA’s request for comments notes that “openness” and “wide availability” are terms without clear definition, and that “more information [is] needed to detail the relationship between openness and the wide availability of both model weights and open foundation models more generally.”  NTIA also requests comments on potential regulatory regimes for dual-use foundation models with widely available model weights, as well as the kinds of regulatory structures “that could deal with not only the large scale of these foundation models, but also the declining level of computing resources needed to fine-tune and retrain them.”

This year’s Munich Security Conference reemphasized the need for Europe to invest in greater defense capabilities and foster a regulatory environment that is conducive to building a defense and technological industrial base. In Munich, European Commission President Ursula von der Leyen committed to appointing a European Commissioner for Defence, if she is reselected later this year by the European Council and European Parliament. And the EU is also due to publish shortly a new defense industrial strategy, mirroring, in part, the first-ever U.S. National Defense Industrial Strategy (NDIS) released earlier this year by the Department of Defense.

The NDIS, in turn, recognizes the need for a strong defense industry in both the U.S. and the EU, as well as other allies and partners across the globe, in order to strengthen supply chain resilience and ensure the production and delivery of critical defense supplies. And global leaders generally see the imperative of working together over the long-term to advance integrated deterrence policies and to strengthen and modernize defense industrial base ecosystems. We will continue tracking these geopolitical trends, which are likely to persist regardless of electoral outcomes in Europe or the United States.

These developments on both sides of the Atlantic follow a number of significant new funding streams in Europe over the past couple of years, for instance:

  • The 2021 revision of the European Defense Fund Regulation allocated €8 billion for common research and development projects, meant to be spent during the 2021-2027 multi-annual financial framework (MFF).
  • As a direct response to Ukraine’s request for assistance with the supply of 155 mm-caliber artillery rounds, the EU adopted the 2023 Act in Support of Ammunition Production (ASAP), with a €500 million fund to scale up production of ammunition and missiles.
  • Most recently, the EU adopted the 2023 European Defense Industry Reinforcement through Common Procurement Act (EDIRPA), which introduced a joint procurement fund of €300 million to facilitate Member States’ collective acquisition of defense products.
  • The European Peace Facility (EPF), an off-budget instrument, with an overall financial ceiling exceeding €12 billion, is primarily destined toward procurement of military material and large-scale financing of weapon supplies to allied third countries (including €6.1 billion for Ukraine).


On 26 January 2024, the European Medicines Agency (EMA) announced that it has received a €10 million grant from the European Commission to support regulatory systems in Africa, and in particular for the setting up of the African Medicines Agency (AMA). Although still in its early stages as an agency, AMA shows significant promise to harmonize the regulatory landscape across the continent in order to improve access to quality, safe and efficacious medical products in Africa. Other key organizations working to establish and implement the vision set out for AMA include the African Union (AU), comprising 55 member states in Africa, the African Union Development Agency (AUDA-NEPAD) and the World Health Organization (WHO). Notably, AMA is expected to play an important role in facilitating intra-regional trade for pharmaceuticals in the context of the Africa Continental Free Trade Area (AfCFTA).

Background to AMA and medicines regulation in Africa

Africa currently has limited harmonization of medicines regulation between jurisdictions. The functionality and regulatory capacity of national medicines regulatory authorities varies significantly. For example, many national regulators lack the technical expertise to independently assess innovative marketing authorization applications and instead adopt “reliance” procedures, whereby authorization by a foreign stringent regulatory authority or registration as a WHO pre-qualified product may be a condition for approval. Pharmaceutical manufacturers seeking to conduct multinational clinical trials or launch their products across Africa can often face challenges when navigating the divergent requirements for each country (and can face additional delays during each approval process).

Multiple initiatives in the last decade have aimed to increase the harmonization of medicines regulation across Africa with varying degrees of success, such as:

From February 17, 2024, the Digital Services Act (“DSA”) will apply to providers of intermediary services (e.g., cloud services, file-sharing services, search engines, social networks and online marketplaces). These entities will be required to comply with a number of obligations, including implementing notice-and-action mechanisms, complying with detailed rules on terms and conditions, and publishing transparency reports on content moderation practices, among others. For more information on the DSA, see our previous blog posts here and here.

As part of its powers conferred under the DSA, the European Commission is empowered to adopt delegated and implementing acts on certain aspects of implementation and enforcement of the DSA. In 2023, the Commission adopted one delegated act on supervisory fees to be paid by very large online platforms and very large online search engines (“VLOPs” and “VLOSEs” respectively), and one implementing act on procedural matters relating to the Commission’s enforcement powers. The Commission has proposed several other delegated and implementing acts, which we set out below. The consultation period for these draft acts has now passed, and we anticipate that they will be adopted in the coming months.

Pending Delegated Acts

  • Draft Delegated Act on Conducting Independent Audits. This draft delegated act defines the steps that designated VLOPs and VLOSEs will need to follow to verify the independence of the auditors, particularly setting the rules for the procedures, methodology and templates used. According to the draft delegated act, designated VLOPs and VLOSEs should be subject to their first audit no later than 16 months after their designation. The consultation period for this draft delegated act ended on June 2, 2023.
  • Draft Delegated Act on Data Access for Research. This draft delegated act specifies the conditions under which vetted researchers may access data from VLOPs and VLOSEs. The consultation period for this draft delegated act ended on May 31, 2023.


This quarterly update highlights key legislative, regulatory, and litigation developments in the fourth quarter of 2023 and early January 2024 related to technology issues.  These included developments related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), data privacy, and cybersecurity.  As noted below, some of these developments provide companies with the opportunity for participation and comment.

I. Artificial Intelligence

Federal Executive Developments on AI

The Executive Branch and U.S. federal agencies had an active quarter, which included the White House’s October 2023 release of the Executive Order (“EO”) on Safe, Secure, and Trustworthy Artificial Intelligence.  The EO declares a host of new actions for federal agencies designed to set standards for AI safety and security; protect Americans’ privacy; advance equity and civil rights; protect vulnerable groups such as consumers, patients, and students; support workers; promote innovation and competition; advance American leadership abroad; and effectively regulate the use of AI in government.  The EO builds on the White House’s prior work surrounding the development of responsible AI.  Concerning privacy, the EO sets forth a number of requirements for the use of personal data for AI systems, including the prioritization of federal support for privacy-preserving techniques and strengthening privacy-preserving research and technologies (e.g., cryptographic tools).  Regarding equity and civil rights, the EO calls for clear guidance to landlords, Federal benefits programs, and Federal contractors to keep AI systems from being used to exacerbate discrimination.  The EO also sets out requirements for developers of AI systems, including requiring companies developing any foundation model “that poses a serious risk to national security, national economic security, or national public health and safety” to notify the federal government when training the model and provide results of all red-team safety tests to the government.

Federal Legislative Activity on AI

Congress continued to evaluate AI legislation and proposed a number of AI bills, though none of these bills are expected to progress in the immediate future.  For example, members of Congress continued to hold meetings on AI and introduced bills related to deepfakes, AI research, and transparency for foundational models.

  • Deepfakes and Inauthentic Content:  In October 2023, a group of bipartisan senators released a discussion draft of the NO FAKES Act, which would prohibit persons or companies from producing an unauthorized digital replica of an individual in a performance or hosting unauthorized digital replicas if the platform has knowledge that the replica was not authorized by the individual depicted. 
  • Research:  In November 2023, Senator Thune (R-SD), along with five bipartisan co-sponsors, introduced the Artificial Intelligence Research, Innovation, and Accountability Act (S. 3312), which would require covered internet platforms that operate generative AI systems to provide their users with clear and conspicuous notice that the covered internet platform uses generative AI. 
  • Transparency for Foundational Models:  In December 2023, Representative Beyer (D-VA-8) introduced the AI Foundation Model Act (H.R. 6881), which would direct the Federal Trade Commission (“FTC”) to establish transparency standards for foundation model deployers in consultation with other agencies.  The standards would require companies to provide consumers and the FTC with information on a model’s training data and mechanisms, as well as information regarding whether user data is collected in inference.
  • Bipartisan Senate Forums:  Senator Schumer’s (D-NY) AI Insight Forums, which are a part of his SAFE Innovation Framework, continued to take place this quarter.  As part of these forums, bipartisan groups of senators met multiple times to learn more about key issues in AI policy, including privacy and liability, long-term risks of AI, and national security.


Earlier this month, the New York Department of Financial Services (“NYDFS”) announced that it had finalized the Second Amendment to its “first-in-the-nation” cybersecurity regulation, 23 NYCRR Part 500.  This Amendment implements many of the changes that NYDFS originally proposed in the two prior versions of the Second Amendment released for public comment in November 2022 and June 2023, respectively.  The first version of the Proposed Second Amendment proposed increased cybersecurity governance and board oversight requirements, the expansion of the types of policies and controls companies would be required to implement, the creation of a new class of companies subject to additional requirements, expanded incident reporting requirements, and the introduction of enumerated factors to be considered in enforcement decisions, among other changes.  The revisions in the second version reflect adjustments rather than substantial changes from the first version.  Compliance periods for the newly finalized requirements in the Second Amendment will be phased over the next two years, as set forth in additional detail below.

The finalized Second Amendment largely adheres to the revisions from the second version of the Proposed Second Amendment but includes a few substantive changes, including those described below:

  • The finalized Amendment removes the previously proposed requirement that each class A company conduct independent audits of its cybersecurity program “at least annually.”  While the finalized Amendment does require each class A company to conduct such audits, they should occur at a frequency based on its risk assessments.  NYDFS stated that it made this change in response to comments that an annual audit requirement would be overly burdensome and with the understanding that class A companies typically conduct more than one audit annually.  See Section 500.2(c).
  • The finalized Amendment updates the oversight requirements for the senior governing body of a covered entity with respect to the covered entity’s cybersecurity risk management.  Updates include, among others, a requirement to confirm that the covered entity’s management has allocated sufficient resources to implement and maintain a cybersecurity program.  This requirement was part of the proposed definition of “Chief Information Security Officer.”  NYDFS stated that it moved this requirement to the senior governing bodies in response to comments that CISOs do not typically make enterprise-wide resource allocation decisions, which are instead the responsibility of senior management.  See Section 500.4(d).
  • The finalized Amendment removes a proposed additional requirement to report certain privileged account compromises to NYDFS.  NYDFS stated that it did so in response to public comments that this proposed requirement “is overbroad and would lead to overreporting.”  However, the finalized Amendment retains previously proposed changes that will require covered entities to report certain ransomware deployments or extortion payments to NYDFS.  See Section 500.17(a).


Yesterday, the White House issued a Fact Sheet summarizing its Executive Order on a comprehensive strategy to support the development of safe and secure artificial intelligence (“AI”).  The Executive Order follows a number of actions by the Biden Administration on AI, including its Blueprint for an AI Bill of Rights and voluntary commitments from certain developers of AI systems.  According to the Administration, the Executive Order establishes new AI safety and security standards, protects privacy, advances equity and civil rights, protects workers, consumers, and patients, promotes innovation and competition, and advances American leadership.  This blog post summarizes these key components.

  • Safety & Security Standards:  The Executive Order sets out several required actions for developers of AI systems.  Notably, the White House, “in accordance with the Defense Production Act,” will require companies developing any foundation model “that poses a serious risk to national security, national economic security, or national public health and safety” to notify the federal government when training the model and provide results of all red-team safety tests to the government.

Relatedly, the Executive Order directs certain federal agencies to undertake the following actions and initiatives:

  • National Institute of Standards and Technology:  establish standards for red-teaming required before the public release of an AI system. 
  • Department of Homeland Security:  apply the NIST standards to use of AI in critical infrastructure sectors and establish an AI Safety and Security Board. 
  • Departments of Energy and Homeland Security:  address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks; it also calls for the creation of standards for biological synthesis screening.
  • Department of Commerce:  develop guidance for content authentication and watermarking to label content generated by AI and received by the government; it also suggests that federal agencies would be required to use these tools.
  • National Security Council & White House Chief of Staff:  develop a National Security Memorandum that ensures that the United States military and intelligence community use AI safely, ethically, and effectively.


On 26 October 2023, the UK’s Online Safety Bill received Royal Assent, becoming the Online Safety Act (“OSA”).  The OSA imposes various obligations on tech companies to prevent the uploading of, and rapidly remove, illegal user content—such as terrorist content, revenge pornography, and child sexual exploitation material—from their services, and also to take steps to

The field of artificial intelligence (“AI”) is at a tipping point. Governments and industries are under increasing pressure to forecast and guide the evolution of a technology that promises to transform our economies and societies. In this series, our lawyers and advisors provide an overview of the policy approaches and regulatory frameworks for AI in jurisdictions around the world. Given the rapid pace of technological and policy developments in this area, the articles in this series should be viewed as snapshots in time, reflecting the current policy environment and priorities in each jurisdiction.

The following article examines the state of play in AI policy and regulation in the United States. The previous article in this series covered the European Union.

Future of AI Policy in the U.S.

U.S. policymakers are focused on artificial intelligence (AI) platforms as the technology explodes into the mainstream.  AI has emerged as an active policy space across Congress and the Biden Administration, as officials scramble to educate themselves on the technology while crafting legislation, rules, and other measures to balance U.S. innovation leadership with national security priorities.

Over the past year, AI issues have drawn bipartisan interest and support.  House and Senate committees have held nearly three dozen hearings on AI this year alone, and more than 30 AI-focused bills have been introduced so far this Congress.  Two bipartisan groups of Senators have announced separate frameworks for comprehensive AI legislation.  Several AI bills—largely focused on the federal government’s internal use of AI—have also been voted on and passed through committees. 

Meanwhile, the Biden Administration has announced plans to issue a comprehensive executive order this fall to address a range of AI risks under existing law.  The Administration has also taken steps to promote the responsible development and deployment of AI systems, including securing voluntary commitments regarding AI safety and transparency from 15 technology companies. 

Despite strong bipartisan interest in AI regulation, commitment from leaders of major technology companies engaged in AI R&D, and broad support from the general public, passing comprehensive AI legislation remains a challenge.  No consensus has emerged around either substance or process, with different groups of Members, particularly in the Senate, developing their own versions of AI legislation through different procedures.  In the House, a bipartisan bill would punt the issue of comprehensive regulation to the executive branch, creating a blue-ribbon commission to study the issue and make recommendations.

I. Major Policy & Regulatory Initiatives

Three versions of a comprehensive AI regulatory regime have emerged in Congress – two in the Senate and one in the House.  We preview these proposals below.

            A. SAFE Innovation: Values-Based Framework and New Legislative Process

In June, Senate Majority Leader Chuck Schumer (D-NY) unveiled a new bipartisan proposal—with Senators Martin Heinrich (D-NM), Todd Young (R-IN), and Mike Rounds (R-SD)—to develop legislation to promote and regulate artificial intelligence.  Leader Schumer proposed a plan to boost U.S. global competitiveness in AI development, while ensuring appropriate protections for consumers and workers.

This quarterly update summarizes key legislative and regulatory developments in the second quarter of 2023 related to key technologies and related topics, including Artificial Intelligence (“AI”), the Internet of Things (“IoT”), connected and automated vehicles (“CAVs”), data privacy and cybersecurity, and online teen safety.

Artificial Intelligence

AI continued to be an area of significant interest to both lawmakers and regulators throughout the second quarter of 2023.  Members of Congress continue to grapple with ways to address risks posed by AI and have held hearings, made public statements, and introduced legislation to regulate AI.  Notably, Senator Chuck Schumer (D-NY) revealed his “SAFE Innovation framework” for AI legislation.  The framework reflects five principles for AI – security, accountability, foundations, explainability, and innovation – and is summarized here.  There were also a number of AI legislative proposals introduced this quarter.  Some proposals, like the National AI Commission Act (H.R. 4223) and Digital Platform Commission Act (S. 1671), propose the creation of an agency or commission to review and regulate AI tools and systems.  Other proposals focus on mandating disclosures of AI systems.  For example, the AI Disclosure Act of 2023 (H.R. 3831) would require generative AI systems to include a specific disclaimer on any outputs generated, and the REAL Political Advertisements Act (S. 1596) would require political advertisements to include a statement within the contents of the advertisement if generative AI was used to generate any image or video footage.  Additionally, Congress convened hearings to explore AI regulation this quarter, including a Senate Judiciary Committee Hearing in May titled “Oversight of A.I.: Rules for Artificial Intelligence.”

There also were several federal Executive Branch and regulatory developments focused on AI in the second quarter of 2023, including, for example:

  • White House:  The White House issued a number of updates on AI this quarter, including the Office of Science and Technology Policy’s strategic plan focused on federal AI research and development, discussed in greater detail here.  The White House also requested comments on the use of automated tools in the workplace, including a request for feedback on tools to surveil, monitor, evaluate, and manage workers, described here.
  • CFPB:  The Consumer Financial Protection Bureau (“CFPB”) issued a spotlight on the adoption and use of chatbots by financial institutions.
  • FTC:  The Federal Trade Commission (“FTC”) continued to issue guidance on AI, such as guidance expressing the FTC’s view that dark patterns extend to AI, that generative AI poses competition concerns, and that tools claiming to spot AI-generated content must make accurate disclosures of their abilities and limitations.
  • HHS Office of National Coordinator for Health IT:  This quarter, the Department of Health and Human Services (“HHS”) released a proposed rule related to certified health IT that enables or interfaces with “predictive decision support interventions” (“DSIs”) that incorporate AI and machine learning technologies.  The proposed rule would require the disclosure of certain information about predictive DSIs to enable users to evaluate DSI quality and whether and how to rely on the DSI recommendations, including a description of the development and validation of the DSI.  Developers of certified health IT would also be required to implement risk management practices for predictive DSIs and make summary information about these practices publicly available.
