Regulatory

Earlier this month, the New York Department of Financial Services (“NYDFS”) announced that it had finalized the Second Amendment to its “first-in-the-nation” cybersecurity regulation, 23 NYCRR Part 500.  This Amendment implements many of the changes that NYDFS originally proposed in the two prior versions of the Second Amendment released for public comment in November 2022 and June 2023.  The first version of the Proposed Second Amendment proposed increased cybersecurity governance and board oversight requirements, the expansion of the types of policies and controls companies would be required to implement, the creation of a new class of companies subject to additional requirements, expanded incident reporting requirements, and the introduction of enumerated factors to be considered in enforcement decisions, among other changes.  The revisions in the second version reflect adjustments rather than substantial changes from the first version.  Compliance periods for the newly finalized requirements in the Second Amendment will be phased in over the next two years, as set forth in additional detail below.

The finalized Second Amendment largely adheres to the revisions from the second version of the Proposed Second Amendment but includes a few substantive changes, including those described below:

  • The finalized Amendment removes the previously proposed requirement that each class A company conduct independent audits of its cybersecurity program “at least annually.”  While the finalized Amendment still requires each class A company to conduct such audits, they need only occur at a frequency based on the company’s risk assessments.  NYDFS stated that it made this change in response to comments that an annual audit requirement would be overly burdensome and with the understanding that class A companies typically conduct more than one audit annually.  See Section 500.2(c).
  • The finalized Amendment updates the oversight requirements for the senior governing body of a covered entity with respect to the covered entity’s cybersecurity risk management.  Updates include, among others, a requirement to confirm that the covered entity’s management has allocated sufficient resources to implement and maintain a cybersecurity program.  This requirement was part of the proposed definition of “Chief Information Security Officer.”  NYDFS stated that it moved this requirement to the senior governing bodies in response to comments that CISOs do not typically make enterprise-wide resource allocation decisions, which are instead the responsibility of senior management.  See Section 500.4(d).
  • The finalized Amendment removes a proposed additional requirement to report certain privileged account compromises to NYDFS.  NYDFS stated that it did so in response to public comments that this proposed requirement “is overbroad and would lead to overreporting.”  However, the finalized Amendment retains previously proposed changes that will require covered entities to report certain ransomware deployments or extortion payments to NYDFS.  See Section 500.17(a).


Continue Reading New York Department of Financial Services Finalizes Second Amendment to Cybersecurity Regulation

Yesterday, the White House issued a Fact Sheet summarizing its Executive Order on a comprehensive strategy to support the development of safe and secure artificial intelligence (“AI”).  The Executive Order follows a number of actions by the Biden Administration on AI, including its Blueprint for an AI Bill of Rights and voluntary commitments from certain developers of AI systems.  According to the Administration, the Executive Order establishes new AI safety and security standards, protects privacy, advances equity and civil rights, protects workers, consumers, and patients, promotes innovation and competition, and advances American leadership.  This blog post summarizes these key components.

  • Safety & Security Standards.  The Executive Order sets out several required actions for developers of AI systems.  Notably, the White House, “in accordance with the Defense Production Act,” will require companies developing any foundation model “that poses a serious risk to national security, national economic security, or national public health and safety” to notify the federal government when training the model and provide the results of all red-team safety tests to the government.

Relatedly, the Executive Order directs certain federal agencies to undertake the following actions and initiatives:

  • National Institute of Standards and Technology:  establish standards for the red-team testing required before the public release of an AI system.
  • Department of Homeland Security:  apply the NIST standards to the use of AI in critical infrastructure sectors and establish an AI Safety and Security Board.
  • Departments of Energy and Homeland Security:  address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks; the Order also calls for the creation of standards for biological synthesis screening.
  • Department of Commerce:  develop guidance for content authentication and watermarking to label content generated by AI and received by the government; the Order also suggests that federal agencies would be required to use these tools.
  • National Security Council & White House Chief of Staff:  develop a National Security Memorandum that ensures that the United States military and intelligence community use AI safely, ethically, and effectively.


Continue Reading Biden Administration Announces Artificial Intelligence Executive Order

On 26 October 2023, the UK’s Online Safety Bill received Royal Assent, becoming the Online Safety Act (“OSA”).  The OSA imposes various obligations on tech companies to prevent the uploading of, and rapidly remove, illegal user content—such as terrorist content, revenge pornography, and child sexual exploitation material—from their services, and also to take steps to

The field of artificial intelligence (“AI”) is at a tipping point. Governments and industries are under increasing pressure to forecast and guide the evolution of a technology that promises to transform our economies and societies. In this series, our lawyers and advisors provide an overview of the policy approaches and regulatory frameworks for AI in jurisdictions around the world. Given the rapid pace of technological and policy developments in this area, the articles in this series should be viewed as snapshots in time, reflecting the current policy environment and priorities in each jurisdiction.

The following article examines the state of play in AI policy and regulation in the United States. The previous article in this series covered the European Union.

Future of AI Policy in the U.S.

U.S. policymakers are focused on artificial intelligence (AI) platforms as they explode into the mainstream.  AI has emerged as an active policy space across Congress and the Biden Administration, as officials scramble to educate themselves on the technology while crafting legislation, rules, and other measures to balance U.S. innovation leadership with national security priorities.

Over the past year, AI issues have drawn bipartisan interest and support.  House and Senate committees have held nearly three dozen hearings on AI this year alone, and more than 30 AI-focused bills have been introduced so far this Congress.  Two bipartisan groups of Senators have announced separate frameworks for comprehensive AI legislation.  Several AI bills—largely focused on the federal government’s internal use of AI—have also been voted on and passed through committees. 

Meanwhile, the Biden Administration has announced plans to issue a comprehensive executive order this fall to address a range of AI risks under existing law.  The Administration has also taken steps to promote the responsible development and deployment of AI systems, including securing voluntary commitments regarding AI safety and transparency from 15 technology companies. 

Despite strong bipartisan interest in AI regulation, commitment from leaders of major technology companies engaged in AI R&D, and broad support from the general public, passing comprehensive AI legislation remains a challenge.  No consensus has emerged around either substance or process, with different groups of Members, particularly in the Senate, developing their own versions of AI legislation through different procedures.  In the House, a bipartisan bill would punt the issue of comprehensive regulation to the executive branch, creating a blue-ribbon commission to study the issue and make recommendations.

I. Major Policy & Regulatory Initiatives

Three versions of a comprehensive AI regulatory regime have emerged in Congress – two in the Senate and one in the House.  We preview these proposals below.

A. SAFE Innovation: Values-Based Framework and New Legislative Process

In June, Senate Majority Leader Chuck Schumer (D-NY) unveiled a new bipartisan proposal—with Senators Martin Heinrich (D-NM), Todd Young (R-IN), and Mike Rounds (R-SD)—to develop legislation to promote and regulate artificial intelligence.  Leader Schumer proposed a plan to boost U.S. global competitiveness in AI development, while ensuring appropriate protections for consumers and workers.

Continue Reading Spotlight Series on Global AI Policy — Part II: U.S. Legislative and Regulatory Developments

This quarterly update summarizes key legislative and regulatory developments in the second quarter of 2023 involving key technologies and related topics, including Artificial Intelligence (“AI”), the Internet of Things (“IoT”), connected and automated vehicles (“CAVs”), data privacy and cybersecurity, and online teen safety.

Artificial Intelligence

AI continued to be an area of significant interest to both lawmakers and regulators throughout the second quarter of 2023.  Members of Congress continue to grapple with ways to address risks posed by AI and have held hearings, made public statements, and introduced legislation to regulate AI.  Notably, Senator Chuck Schumer (D-NY) revealed his “SAFE Innovation framework” for AI legislation.  The framework reflects five principles for AI – security, accountability, foundations, explainability, and innovation – and is summarized here.  There were also a number of AI legislative proposals introduced this quarter.  Some proposals, like the National AI Commission Act (H.R. 4223) and Digital Platform Commission Act (S. 1671), propose the creation of an agency or commission to review and regulate AI tools and systems.  Other proposals focus on mandating disclosures of AI systems.  For example, the AI Disclosure Act of 2023 (H.R. 3831) would require generative AI systems to include a specific disclaimer on any outputs generated, and the REAL Political Advertisements Act (S. 1596) would require political advertisements to include a statement within the contents of the advertisement if generative AI was used to generate any image or video footage.  Additionally, Congress convened hearings to explore AI regulation this quarter, including a Senate Judiciary Committee hearing in May titled “Oversight of A.I.: Rules for Artificial Intelligence.”

There also were several federal Executive Branch and regulatory developments focused on AI in the second quarter of 2023, including, for example:

  • White House:  The White House issued a number of updates on AI this quarter, including the Office of Science and Technology Policy’s strategic plan focused on federal AI research and development, discussed in greater detail here.  The White House also requested comments on the use of automated tools in the workplace, including a request for feedback on tools to surveil, monitor, evaluate, and manage workers, described here.
  • CFPB:  The Consumer Financial Protection Bureau (“CFPB”) issued a spotlight on the adoption and use of chatbots by financial institutions.
  • FTC:  The Federal Trade Commission (“FTC”) continued to issue guidance on AI, such as guidance expressing the FTC’s view that dark patterns extend to AI, that generative AI poses competition concerns, and that tools claiming to spot AI-generated content must make accurate disclosures of their abilities and limitations.
  • HHS Office of National Coordinator for Health IT:  This quarter, the Department of Health and Human Services (“HHS”) released a proposed rule related to certified health IT that enables or interfaces with “predictive decision support interventions” (“DSIs”) that incorporate AI and machine learning technologies.  The proposed rule would require the disclosure of certain information about predictive DSIs to enable users to evaluate DSI quality and whether and how to rely on the DSI recommendations, including a description of the development and validation of the DSI.  Developers of certified health IT would also be required to implement risk management practices for predictive DSIs and make summary information about these practices publicly available.


Continue Reading U.S. Tech Legislative & Regulatory Update – Second Quarter 2023

On 10 July 2023, the European Commission (the “Commission”) adopted the Implementing Regulation (“IR”) for the European Union (“EU”) Foreign Subsidies Regulation (“FSR”). The FSR, which starts to apply today, 12 July 2023, creates a new instrument designed to prevent foreign subsidies from distorting the EU internal market (see our blog). The objective is to level the playing field within EU markets between companies subject to scrutiny under the EU State aid rules and companies receiving subsidies from non-EU Member States.

To attain this objective, the FSR empowers the Commission to assess foreign subsidies either on its own motion or after the notification of concentrations or public procurement tenders in the EU where certain thresholds are exceeded. Foreign subsidies are financial contributions (i.e., any transfer of value) granted by non-EU countries, or by entities whose actions can be attributed to a non-EU country (such contributions being foreign financial contributions, or “FFCs”), that confer a benefit not available on the market specifically on one or several companies or industries. Where foreign subsidies are problematic, this assessment may lead to remedies and even to the prohibition of the concentration or of the award of a public contract. Although the FSR starts to apply on 12 July 2023, allowing the Commission to investigate foreign subsidies on its own motion, the notification obligations only kick in on 12 October 2023. That means that notification may be requested for transactions signed after 12 July but not closed by 12 October, and for public procurement procedures initiated after 12 July.

The purpose of the IR is to set out the rules applicable to proceedings conducted by the Commission under the FSR, including the submission of notifications.

Key things you need to know about the IR and the notification obligations:

  • The IR adopts the forms that notifying parties will have to complete and submit to the Commission in the context of concentrations and public procurement tenders.
  • The Commission must review foreign subsidies within statutory time limits that start to run as soon as the notification is complete and that may be suspended to obtain further information.
  • Detailed information must be submitted for FFCs that are considered to fall into the most distortive categories of foreign subsidies, whereas aggregate information must be provided for most other FFCs.
  • Information must be provided for FFCs provided to all group entities of the party or parties involved.
  • Companies that are likely to be involved in large concentrations or public procurements would be well advised to prepare sufficiently well in advance to avoid delays in their clearance timeline.


Continue Reading The EU Foreign Subsidies Regulation starts to apply – what you need to know about the notification obligations

California recently passed a series of new regulations affecting its “pay-to-play” laws that limit political contributions by state and local government contractors and others involved in proceedings on contracts, licenses, permits, and other “entitlements for use” in the state.  These regulations implement changes to the law that took effect this year, which include applying the

On 26 June 2023, the International Sustainability Standards Board (the “ISSB”) issued its inaugural International Financial Reporting Standards (“IFRS”) Sustainability Disclosure Standards (the “Standards”), heralding progress in the development of a global baseline of sustainability-linked disclosures. The Standards build on the concepts that underpin the IFRS Accounting Standards, which are required in more than 140 jurisdictions, but notably not in the United States for domestic issuers subject to regulation by the Securities and Exchange Commission (“SEC”), which must apply US Generally Accepted Accounting Principles (“US GAAP”).  Despite broad investor appetite for transparent, uniform and comparable disclosure rules, the scope of required sustainability disclosure and the timing for adoption of the SEC’s pending climate disclosure rule remain unresolved.  The inaugural Standards comprise two frameworks:

  1. IFRS S1 General Requirements for Disclosure of Sustainability-related Financial Information (“IFRS S1”) requires an entity to disclose information about all sustainability-related risks and opportunities that could reasonably be expected to affect the entity’s prospects. The effect on the entity’s prospects refers to the effect on the entity’s cash flows, its access to finance, or cost of capital over the short, medium or long term.
  2. IFRS S2 Climate-related Disclosures (“IFRS S2”) requires an entity to provide information about its exposure to climate-related risks and opportunities. Information to be disclosed includes both physical risks, such as extreme weather events, and transition risks, such as changes in customer behaviour.

Both IFRS S1 and IFRS S2 are effective for annual reporting periods beginning on or after 1 January 2024. Accordingly, where the Standards have been adopted for a 2024 reporting cycle, relevant disclosures will begin to be published in 2025 in an entity’s general purpose financial reports (subject to transitional provisions), alongside an “explicit and unreserved statement of compliance” when disclosing against the Standards. Whilst the launch of the Standards is a welcome step towards greater uniformity in corporate reporting, individual jurisdictions will decide whether entities are required to comply with them.

Continue Reading ISSB issues inaugural global sustainability disclosure standards

Today, Senate Majority Leader Chuck Schumer (D-NY) unveiled a new bipartisan proposal to develop legislation to promote and regulate artificial intelligence. In a speech at the Center for Strategic & International Studies, Leader Schumer remarked: “[W]ith AI, we cannot be ostriches sticking our heads in the sand. The question is: what role [do] Congress

2022 and 2023 may be remembered as pivotal years for efforts against so-called “greenwashing.”  In this article, we look at some recent developments in the regulation of “green claims” in the UK, the US, and the EU that corporates should be aware of.  We provide a broad summary and comparison snapshot of the UK, US and EU regimes to help companies navigate these rules.  Now is a critical time for companies to get up to speed: authorities in all three jurisdictions are focusing ever more intently on this issue; company reputations will increasingly rise and fall with the strength of their green claims; and national regulators are set to gain new powers (including the power to levy significant fines) to tackle companies found in breach.

I.  Summary of recent developments: What’s new in greenwashing?

In January 2022, the UK’s Competition & Markets Authority (“CMA”) launched a sector-by-sector review of misleading environmental claims.  The CMA started with the fashion sector, and called out a number of high-profile, fast-fashion companies for their practices.  Twelve months later, the CMA announced that it was expanding the investigation to greenwashing around “household essentials”, including food, drink, toiletries and cleaning products.  The CMA’s review is the first concerted application of its new Green Claims Code, published in September 2021, which gives guidance for any business (wherever based) making environmental claims in the UK.

Meanwhile, in December 2022, the US Federal Trade Commission (“FTC”) launched a review of the “Guides for the Use of Environmental Marketing Claims” (“Green Guides”), which were last updated in 2012.  The initial comment period closed on April 24, 2023.  The FTC plans to update the Green Guides to reflect developments in consumers’ perception of environmental marketing claims.  As part of its ongoing review, the FTC also announced a workshop to examine recyclable claims.  The workshop is scheduled for May 23, 2023, and the public can submit comments on the subject of recyclable claims through June 13, 2023.  For more detail on the review, please see our dedicated blog post, here.

Finally, the EU has proposed two Directives to modernize and harmonize the rules on green claims across the bloc (together, the “EU Green Claims Proposals”).  Currently, EU law does not specifically regulate environmental claims.  Instead, environmental claims are subject only to general consumer protection and advertising rules (set out in Directive 2005/29 on Unfair Business-to-Consumer Practices and Directive 2006/114 on Comparative Advertising).  Admittedly, the EU has published guidance on interpreting and applying the general rules in the context of green claims (see the guidance here, and see our previous blog post discussing the guidance here).  However, in practice, EU Member States approach interpretation and enforcement in a variety of different ways.  On March 3, 2022, the European Commission published a Proposal for a Directive Empowering Consumers for the Green Transition, also known as the “Greenwashing Directive.”  The Greenwashing Directive amends the EU’s existing consumer protection rules, and bans a number of general green claims, such as “climate neutral” or “eco-friendly.”  It also imposes some rules on the use of non-environmental sustainability claims or “social impact” claims, such as “locally produced” or “fair labour.”  One year later, on March 22, 2023, the European Commission presented a Proposal for a Directive on Green Claims (“Green Claims Directive”), which we discussed here.  The Green Claims Directive proposes a new and strict framework, applicable to all companies operating in the EU/EEA, to harmonize the rules on the substantiation of voluntary green claims. 

Below, we outline the key aspects of the different legislative frameworks.

Continue Reading The Green Claims Global Drive: Developments in the UK, US and EU