U.S. policymakers have continued to express interest in legislation to regulate artificial intelligence (“AI”), particularly at the state level.  Although comprehensive AI bills and frameworks in Congress have received substantial attention, state legislatures also have been moving forward with their own efforts to regulate AI.  This blog post summarizes key themes in state AI bills introduced in the past year.  Now that new state legislative sessions have commenced, we expect to see even more activity in the months ahead.

  • Notice Requirements:  A number of state AI bills focus on notice to individuals.  Some bills would require covered entities to notify individuals when using automated decision-making tools for decisions that affect their rights and opportunities, such as the use of AI in employment.  For example, the District of Columbia’s “Stop Discrimination by Algorithms Act” (B 114) would require a notice about how the covered entity uses personal information in algorithmic eligibility determinations, including information about the sources of that information, and it would require a separate notice to an individual affected by an algorithmic eligibility determination that results in an “adverse action.”  Similarly, the Massachusetts “Act Preventing a Dystopian Work Environment” (HB 1873) would require employers or vendors using an automated decision system to provide notice to workers prior to adopting the system and would require an additional notice if there are “significant updates or changes” made to the system.  Other AI bills have focused on disclosure requirements between entities in the AI ecosystem.  For example, Washington’s legislature is considering a bill (HB 1951) that would require developers of automated decision tools to provide deployers with documentation of the tool’s “known limitations,” the types of data used to program or train the tool, and how the tool was evaluated for validity.
  • Impact Assessments:  Another key theme in state AI bills is the requirement to conduct impact assessments for AI tools, with the aim of mitigating potential discrimination, privacy, and accuracy harms.  For example, a Vermont bill (HB 114) would require employers using automated decision-making tools to conduct algorithmic impact assessments prior to using those tools for employment-related decisions.  Additionally, the Washington bill mentioned above (HB 1951) would require deployers to complete impact assessments for automated decision tools that include, for example, assessments of the reasonably foreseeable risks of algorithmic decision-making and the safeguards implemented.
  • Individual Rights:  State AI bills also have sought to grant consumers certain rights over automated decision-making.  For example, several state AI bills would establish an individual right to opt out of decisions based on automated decision-making or to request a human reevaluation of such decisions.  California (AB 331) and New York (AB 7859) are considering bills that would require AI deployers to allow individuals to request “alternative selection processes” where an automated decision tool is being used to make, or is a controlling factor in, a consequential decision.  Similarly, New York’s AI Bill of Rights (S 8209) would provide individuals with the right to opt out of the use of automated systems in favor of a human alternative.
  • Licensing & Registration Regimes:  A handful of state legislatures have proposed requirements for AI licensing and registration.  For example, New York’s Advanced AI Licensing Act (A 8195) would require all developers and operators of certain “high-risk advanced AI systems” to apply for a license from the state before use.  Other bills require registration for certain uses of the AI system.  For instance, an amendment introduced in the Illinois legislature (HB 1002) would require state certification of diagnostic algorithms used by hospitals.
  • Generative AI & Content Labeling:  Another prominent theme in state AI legislation has been a focus on labeling content produced by generative AI systems.  For example, Rhode Island is considering a bill (H 6286) that would require a “distinctive watermark” to authenticate generative AI content.

We will continue to monitor these and related developments across our blogs.

From February 17, 2024, the Digital Services Act (“DSA”) will apply to providers of intermediary services (e.g., cloud services, file-sharing services, search engines, social networks and online marketplaces). These entities will be required to comply with a number of obligations, including implementing notice-and-action mechanisms, complying with detailed rules on terms and conditions, and publishing transparency reports on content moderation practices. For more information on the DSA, see our previous blog posts here and here.

Under the DSA, the European Commission is empowered to adopt delegated and implementing acts on certain aspects of the implementation and enforcement of the DSA. In 2023, the Commission adopted one delegated act on supervisory fees to be paid by very large online platforms and very large online search engines (“VLOPs” and “VLOSEs” respectively), and one implementing act on procedural matters relating to the Commission’s enforcement powers. The Commission has proposed several other delegated and implementing acts, which we set out below. The consultation periods for these draft acts have now passed, and we anticipate that they will be adopted in the coming months.

Pending Delegated Acts

  • Draft Delegated Act on Conducting Independent Audits. This draft delegated act defines the steps that designated VLOPs and VLOSEs will need to follow to verify the independence of their auditors, in particular by setting the rules for the procedures, methodology, and templates used. According to the draft delegated act, designated VLOPs and VLOSEs should be subject to their first audit no later than 16 months after their designation. The consultation period for this draft delegated act ended on June 2, 2023.
  • Draft Delegated Act on Data Access for Research. This draft delegated act specifies the conditions under which vetted researchers may access data from VLOPs and VLOSEs. The consultation period for this draft delegated act ended on May 31, 2023.

New Jersey and New Hampshire are the latest states to pass comprehensive privacy legislation, joining California, Virginia, Colorado, Connecticut, Utah, Iowa, Indiana, Tennessee, Montana, Oregon, Texas, Florida, and Delaware.  Below is a summary of key takeaways.

New Jersey

On January 8, 2024, the New Jersey state senate passed S.B. 332 (“the Act”), which was signed into law on January 16, 2024.  The Act, which takes effect 365 days after enactment, resembles the comprehensive privacy statutes in Connecticut, Colorado, Montana, and Oregon, though there are some notable distinctions. 

  • Scope and Applicability:  The Act will apply to controllers that conduct business or produce products or services in New Jersey, and, during a calendar year, control or process either (1) the personal data of at least 100,000 consumers, excluding personal data processed for the sole purpose of completing a transaction; or (2) the personal data of at least 25,000 consumers where the business derives revenue, or receives a discount on the price of any goods or services, from the sale of personal data. The Act omits several exemptions present in other state comprehensive privacy laws, including exemptions for nonprofit organizations and information covered by the Family Educational Rights and Privacy Act.
  • Consumer Rights:  Consumers will have the rights of access, deletion, portability, and correction under the Act.  Moreover, the Act will provide consumers with the right to opt out of targeted advertising, the sale of personal data, and profiling in furtherance of decisions that produce legal or similarly significant effects concerning the consumer.  The Act will require controllers to develop a universal opt-out mechanism by which consumers can exercise these opt-out rights within six months of the Act’s effective date.
  • Sensitive Data:  The Act will require consent prior to the collection of sensitive data. “Sensitive data” is defined to include, among other things, racial or ethnic origin, religious beliefs, mental or physical health condition, sex life or sexual orientation, citizenship or immigration status, status as transgender or non-binary, and genetic or biometric data.  Notably, the Act is the first comprehensive privacy statute other than the California Consumer Privacy Act to include financial information in its definition of sensitive data.  The Act defines financial information as an “account number, account log-in, financial account, or credit or debit card number, in combination with any required security code, access code, or password that would permit access to a consumer’s financial account.”
  • Opt-In Consent for Certain Processing of Personal Data Concerning Teens:  Unless a controller obtains a consumer’s consent, the Act will prohibit the controller from processing personal data for targeted advertising, sale, or profiling where the controller has actual knowledge, or willfully disregards, that the consumer is between the ages of 13 and 16 years old.
  • Enforcement and Rulemaking:  The Act grants the New Jersey Attorney General enforcement authority.  The Act also provides controllers with a 30-day right to cure for certain violations, which will sunset eighteen months after the Act’s effective date.  Like the comprehensive privacy laws in California and Colorado, the Act authorizes rulemaking under the state Administrative Procedure Act.  Specifically, the Act requires the Director of the Division of Consumer Affairs in the Department of Law and Public Safety to promulgate rules and regulations pursuant to the Administrative Procedure Act that are necessary to effectuate the Act’s provisions.  

On January 16, the attorneys general of 25 states – including California, Illinois, and Washington – and the District of Columbia filed reply comments to the Federal Communications Commission’s (FCC) November Notice of Inquiry on the implications of artificial intelligence (AI) technology for efforts to mitigate robocalls and robotexts.

The Telephone Consumer Protection Act (TCPA) limits the conditions under which a person may lawfully make a telephone call using “an artificial or prerecorded voice.”  The reply comments call on the FCC to take the position that “any type of AI technology that generates a human voice should be considered an ‘artificial voice’ for purposes of the [TCPA].”  They further state that a more permissive approach would “act as a ‘stamp of approval’ for unscrupulous businesses seeking to employ AI technologies to inundate consumers with unwanted robocalls for which they did not provide consent[], all based on the argument that the business’s advanced AI technology acts as a functional equivalent of a live agent.”

On 24 January 2024, the European Commission (the “Commission”) published its European Economic Security Package (the “EESP”), which included the long-awaited proposal to reform the EU Regulation that established a framework for Foreign Direct Investment screening (the “EU FDI Regulation”). The EESP’s proposed regulation (the “Proposed Regulation”) is one of the EESP’s five initiatives to implement the European Economic Security Strategy (published in June 2023) – for an overview of the EESP, see our Global Policy Watch blog.

The Proposed Regulation seeks to improve the legal framework for foreign investment screening in the European Union and builds upon feedback that the Commission received during its public consultation in 2023. If adopted as proposed, it will significantly change the landscape of foreign investment screening regimes across the EU (for a full report of the public consultation see here).

This blog highlights the key changes under the proposed reform and analyses their impact on global deal making. We also provide an outlook on the next steps for the proposals.

Key takeaways and comment

  • Extended scope to include indirect foreign investments through EU subsidiaries and greenfield investments.
  • Minimum standards and greater harmonisation across the EU.
  • Introduction of call-in powers to review all transactions for at least 15 months after completion.
  • Coordinated submission of foreign investment filings in multi-country transactions.
  • Focus cooperation between Member States and the Commission on cases more likely to be sensitive.
  • More prescriptive guidance on substantive assessments and remedies, including a formal obligation for national screening authorities to prohibit or impose conditions on transactions they conclude are likely to negatively affect security or public order in one or more Member States.
  • Increased reporting, while protecting confidential information.

On January 24, 2024, the U.S. National Science Foundation (“NSF”) announced the launch of the National Artificial Intelligence Research Resource (“NAIRR”) pilot, a two-year initiative to develop a shared national research infrastructure for responsible AI discovery and innovation. The launch makes progress on a goal in President Biden’s recent Executive Order on AI safety and security that directs the NSF to launch a NAIRR pilot within 90 days.

The NAIRR pilot will broadly support AI-related research, with an initial focus on the application of AI to societal challenges, including human health and environmental and infrastructure sustainability.  To support researchers and educators, the NAIRR pilot also will compile AI resources such as pre-trained models, responsible AI toolkits, and industry-specific training data sets that are aligned with the NAIRR pilot goals.  The NSF will partner with 10 other federal agencies as well as 25 private sector, nonprofit, and philanthropic organizations to implement the NAIRR pilot and improve its ecosystem over time.

The NSF has stated that it welcomes additional partners and will release a broader call for proposals from the research community in spring 2024.

On January 24, the EU Commission released a communication announcing the European Economic Security Package (EESP) – as trailed in our previous blog. The Communication, which implements the EU’s Economic Security Strategy (published in June 2023), is aimed at strengthening the EU’s economic security in a number of areas:

  • improved screening of foreign investment into the EU;
  • greater export controls coordination;
  • identification of outbound investment risks;
  • enhanced support for research and development involving dual-use technologies;
  • upgraded research security.

The EESP comprises five measures – a legislative proposal; three white papers; and a Proposal for a Council Recommendation.

Proposed revision of the Regulation on the screening of Foreign Direct Investment

The proposed revisions to the screening framework would broaden the scope of the existing FDI Screening Regulation by:

  • extending screening to cover indirect foreign investment, including acquisitions by EU investors ultimately controlled by a non-EU country;
  • bringing certain greenfield investments within the scope of screening regimes;
  • ensuring all Member States have a screening mechanism in place, with better harmonised national rules (there are currently five Member States without complete foreign investment screening legislation); and
  • identifying minimum sectoral scope where Member States must screen foreign investments.

Notably, the proposed minimum scope for Member States’ screening regimes would encompass military, dual-use and medical sectors; projects identified as sensitive or as being of ‘Union interest’; and certain ‘critical technology’ sectors including advanced semiconductors, artificial intelligence, biotechnologies, and quantum technologies.

The Commission also proposed requirements for greater coordination in the submission of foreign investment review filings across the EU, which could significantly impact transaction timelines. These proposals will be covered in more detail in a blog post on our Covington Competition blog.

White Paper on Outbound Investment

The non-binding nature of the White Paper reflects two factors: the limited information available to substantiate perceptions that EU outbound investment could create security risks not addressed by existing tools, and the tension between the positions of the Commission and the Member States on this issue (whilst the Commission stressed the need to scrutinize outbound EU investments to protect the EU’s security interests by preventing leakage of technology and know-how, Member States feared a loss of sovereignty).

As a compromise, the White Paper proposes a detailed analysis of outbound investment; a three-month stakeholder consultation; and a 12-month monitoring and assessment of outbound investments at national level. The White Paper initially proposes that the monitoring phase focus on the four ‘critical technology areas’ mentioned above. The assessment period will conclude with a joint risk assessment report, expected in Autumn 2025, which will enable the Commission to decide whether a more concrete policy response is required.

The Commission’s consideration of outbound investment screening follows similar moves by other countries – including the United States and the United Kingdom (where the Government has been engaging with industry stakeholders to understand and assess potential risks) – to review or enhance their capacity to intervene in such transactions.

White Paper on Export Controls

The White Paper proposes the creation of a political coordination forum to help Member States reach common positions on export control matters, and proposes bringing forward the evaluation of the recast Dual-Use Regulation to the first quarter of 2025.

While the modernization of the EU export control regime effected by the adoption of the recast Dual-Use Regulation in 2021 is still relatively recent, the White Paper contemplates a need for further changes prompted by current geopolitical tensions, the continued pace of technological change, and the increased use of trade restrictions for foreign policy purposes by the EU and its partners.

In a notable departure from the EU’s previous position, the Commission will make a proposal to introduce controls at EU level for items that ‘would have been adopted’ by multilateral regimes (such as the Wassenaar Arrangement and the Missile Technology Control Regime) but have not been adopted because members of those regimes have blocked the relevant revisions.

The Commission will also adopt a Recommendation to encourage coordination within the EU on any new national control lists.  Particularly relevant for advanced and emerging technologies, these measures are designed to avoid divergence between Member States’ national controls, and to aid the EU in responding to third countries that impose new controls unilaterally. However, the possibility that EU-level lists could further fragment the enforcement of multilateral regimes and undermine their implementation and effectiveness will remain controversial with industry and a consideration for Member States.

White Paper on enhancing support for research and development involving technologies with dual-use potential

The White Paper opens a consultation with public authorities, civil society, industry, and academia on options for strategic support for dual-use technology development aimed at maintaining a competitive edge in critical and emerging dual-use technologies.

The Paper also reviews the R & D support offered under current EU funding programs and identifies three potential options for the future:

  • no change to existing regimes;
  • removing the exclusive focus on civil applications in selected parts of the successor program to Horizon Europe; or
  • creating a dedicated instrument with a specific focus on dual-use R&D.

Proposal for a Council Recommendation on enhancing Research Security

The Commission recognizes that international tensions, combined with the increasing geopolitical relevance of research and innovation, mean that European researchers and academics are increasingly confronted with risks when cooperating internationally.  These risks include research and innovation being targeted and used in ways that threaten EU security and facilitate the undesirable transfer of critical technology.

The Proposal sets out a number of EU-level cooperation and coordination principles that should underpin all research security policies, such as academic freedom, institutional autonomy, and non-discrimination.  It includes practical safeguarding measures that Member States can take, suggests the establishment of a European Centre of Expertise on Research Security, and encourages Member States to establish a policy framework for research security, including by incentivizing research centers to appoint research security advisers.

Comment

The EESP is another plank in the EU’s policy of Strategic Autonomy.  It gives legislative weight to the Commission President’s speeches in March and April 2023 (which focused on re-balancing the trading relationship with China) and the European Economic Security Strategy of June 2023 (aimed at creating a framework for assessing and addressing risks to EU economic security, while ensuring that the EU remained an open and attractive destination for business and investment).

The EESP brings the EU into line with the more national security-focused approach to foreign investment taken by the US – an approach which appears reciprocated in China. Whether this approach to economic security will continue depends to a large extent on the outcome of the European elections in June, which will shape not only the new Parliament, but also the Commission.

Covington’s international teams of policy and regulatory experts are well-placed to help and advise companies caught in the middle of this geopolitical and policy tussle, grappling with the competing demands of de-risking inward and outward Chinese investment without de-coupling trade. 

In December 2023, the Dutch supervisory authority (“Dutch SA”) fined a credit card company €150,000 for failing to perform a proper data protection impact assessment (“DPIA”) in accordance with Art. 35 GDPR for its “identification and verification process”.

First, the Dutch SA decided that the company was required to perform a DPIA because the processing met two of the nine conditions set out in the EDPB Guidelines on DPIAs.  In particular, the processing was large scale (1.5 million customers) and involved personal data that was sensitive or of a “very personal nature” (name, date of birth, place of birth, e-mail address, telephone number, gender, Netherlands government ID Number, number of the ID document and photo).

Second, the SA decided that the company’s impact assessment of its identification and verification process (which the company called a “Change Risk Assessment”) was not a valid DPIA because it was too focused on financial services regulations and did not sufficiently take into account data protection requirements, such as the necessity and proportionality of the processing.  The DPO was also not sufficiently involved in the assessment.


Covington’s Data Privacy and Cybersecurity team regularly advises companies on all aspects of their privacy compliance programs, including on data protection impact assessments.

This quarterly update highlights key legislative, regulatory, and litigation developments in the fourth quarter of 2023 and early January 2024 related to technology issues.  These included developments related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), data privacy, and cybersecurity.  As noted below, some of these developments provide companies with the opportunity for participation and comment.

I. Artificial Intelligence

Federal Executive Developments on AI

The Executive Branch and U.S. federal agencies had an active quarter, which included the White House’s October 2023 release of the Executive Order (“EO”) on Safe, Secure, and Trustworthy Artificial Intelligence.  The EO sets out a host of new actions for federal agencies designed to set standards for AI safety and security; protect Americans’ privacy; advance equity and civil rights; protect vulnerable groups such as consumers, patients, and students; support workers; promote innovation and competition; advance American leadership abroad; and effectively regulate the use of AI in government.  The EO builds on the White House’s prior work surrounding the development of responsible AI.  Concerning privacy, the EO sets forth a number of requirements for the use of personal data for AI systems, including the prioritization of federal support for privacy-preserving techniques and strengthening privacy-preserving research and technologies (e.g., cryptographic tools).  Regarding equity and civil rights, the EO calls for clear guidance to landlords, Federal benefits programs, and Federal contractors to keep AI systems from being used to exacerbate discrimination.  The EO also sets out requirements for developers of AI systems, including requiring companies developing any foundation model “that poses a serious risk to national security, national economic security, or national public health and safety” to notify the federal government when training the model and provide results of all red-team safety tests to the government.

Federal Legislative Activity on AI

Congress continued to evaluate AI legislation and proposed a number of AI bills, though none of these bills are expected to progress in the immediate future.  For example, members of Congress continued to hold meetings on AI and introduced bills related to deepfakes, AI research, and transparency for foundation models.

  • Deepfakes and Inauthentic Content:  In October 2023, a group of bipartisan senators released a discussion draft of the NO FAKES Act, which would prohibit persons or companies from producing an unauthorized digital replica of an individual in a performance or hosting unauthorized digital replicas if the platform has knowledge that the replica was not authorized by the individual depicted. 
  • Research:  In November 2023, Senator Thune (R-SD), along with five bipartisan co-sponsors, introduced the Artificial Intelligence Research, Innovation, and Accountability Act (S. 3312), which would require covered internet platforms that operate generative AI systems to provide their users with clear and conspicuous notice that the covered internet platform uses generative AI. 
  • Transparency for Foundation Models:  In December 2023, Representative Beyer (D-VA-8) introduced the AI Foundation Model Transparency Act (H.R. 6881), which would direct the Federal Trade Commission (“FTC”) to establish transparency standards for foundation model deployers in consultation with other agencies.  The standards would require companies to provide consumers and the FTC with information on a model’s training data and mechanisms, as well as information regarding whether user data is collected in inference.
  • Bipartisan Senate Forums:  Senator Schumer’s (D-NY) AI Insight Forums, which are a part of his SAFE Innovation Framework, continued to take place this quarter.  As part of these forums, bipartisan groups of senators met multiple times to learn more about key issues in AI policy, including privacy and liability, long-term risks of AI, and national security.

On December 5, 2023, the Spanish presidency of the Council of the EU issued a declaration to strengthen collaboration with Member States and the European Commission to develop a leading quantum technology ecosystem in Europe.

The declaration acknowledges the revolutionary potential of quantum computing, which uses quantum-mechanical principles and quantum bits known as “qubits” to solve certain classes of complex mathematical problems exponentially faster than classical computers.
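To give a sense of where that speed-up comes from (a standard textbook illustration, not drawn from the declaration itself): a register of n qubits exists in a superposition over all 2^n classical bit strings,

\[
|\psi\rangle \;=\; \sum_{x \in \{0,1\}^n} \alpha_x \, |x\rangle, \qquad \sum_{x} |\alpha_x|^2 = 1,
\]

so a full classical description of the state must track up to 2^n complex amplitudes; at just 50 qubits that is already roughly 10^15 numbers. Algorithms that exploit this structure, such as Shor’s and Grover’s, achieve speed-ups for specific problems rather than across the board.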

The declaration was launched with eight Member State signatories (Denmark, Finland, Germany, Greece, Hungary, Italy, Slovenia, and Sweden), and invites other Member States to sign. By doing so, they agree to recognize the “strategic importance of quantum technologies for the scientific and industrial competitiveness of the EU” and commit to collaborating to make Europe the “‘quantum valley’ of the world, the leading region globally for quantum excellence and innovation.”

EU strategy on quantum computing

The declaration builds upon existing efforts to build quantum technology infrastructure in the EU, such as the Quantum Technologies Flagship that brings together research institutions, industry and public funders to develop commercial applications for quantum research, and the European High Performance Computing Joint Undertaking (EuroHPC JU) initiative to build state-of-the-art pilot quantum computers, both of which were launched in 2018.

Potential Impacts

The potential applications of quantum computing are wide-ranging and industry-agnostic. For instance, they could be used to enhance the analysis of large data sets, optimize supply-chain processes, and accelerate the development of machine-learning algorithms. While the technology is still nascent, its potential commercial impact is hard to overstate: a recent estimate by McKinsey suggests that the life sciences, chemicals, automotive and financial services industries alone stand to gain up to $1.3 trillion in value from quantum computing by 2035.

Given the potential applications, quantum computing could, in particular, have a significant impact on companies in the life sciences sector. To provide a few examples in the pharmaceutical R&D space, quantum computing could potentially be used to improve:

  • Drug discovery, by improving molecular design, predicting molecular interactions, and running molecular dynamic simulations.
  • Clinical development, by designing clinical trials, analyzing trial data and predicting adverse event reactions.
  • Diagnostics, by improving image analysis and reconstruction.
  • Therapy, by developing and optimizing treatment plans.
  • Manufacturing and supply chain processes, by optimizing them through risk modelling and data analysis.

However, the benefits are not without risks. Most significantly, there is a concern that quantum technologies may in the future be able to solve the complex mathematical problems that underpin currently used cryptography methods, posing a threat to modern encryption technology and cybersecurity.
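To make that concern concrete, below is a minimal, illustrative sketch of textbook RSA, the style of public-key scheme at issue, using deliberately tiny, insecure numbers chosen for demonstration only. Its security rests entirely on the difficulty of factoring the public modulus, which is exactly the problem Shor’s algorithm would solve efficiently on a sufficiently large, fault-tolerant quantum computer.

```python
# Toy RSA for illustration only: real deployments use primes hundreds of
# digits long plus padding schemes. The point is simply that security
# hinges on the hardness of factoring the public modulus n.

p, q = 61, 53                      # secret primes
n = p * q                          # public modulus: 3233
phi = (p - 1) * (q - 1)            # totient: computable only by factoring n
e = 17                             # public exponent
d = pow(e, -1, phi)                # private exponent (modular inverse, Python 3.8+)

message = 65
ciphertext = pow(message, e, n)    # encrypt with the public key (n, e)
recovered = pow(ciphertext, d, n)  # decrypt with the private key d
assert recovered == message

# An attacker who factors n = 3233 back into 61 * 53 can recompute phi and d,
# and so read every message. Shor's algorithm performs that factoring in
# polynomial time on a large enough fault-tolerant quantum computer.
```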

It remains to be seen how the field of quantum computing will develop, and how its potential impacts will materialize. Crucially, regulation will likely play a big role in managing its impact, both in the EU and beyond.

Covington is monitoring developments in this fast-growing area. Please reach out to a member of the team with any inquiries.