Earlier this week, Members of the European Parliament (MEPs) voted in favor of the much-anticipated AI Act. With 523 votes in favor, 46 against, and 49 abstentions, the vote marks the culmination of an effort that began in April 2021, when the EU Commission first published its proposal for the Act.

Here’s what lies ahead:

  • Language finalization: Before the Act can officially become law, it will undergo a review by lawyer-linguists (referred to as the corrigendum procedure). This step aims to identify and correct errors in the text and to ensure that numbering and references (to both internal and external sources) are correct before the text’s publication in the Official Journal of the EU (“OJ”).
  • Council approval: The Act is next set to move to the Council of the EU for its formal, final endorsement, which is expected to take place in April.
  • Implementation and impact: The AI Act will officially enter into force 20 days after its publication in the OJ.  The Act’s provisions on prohibited AI practices will apply six months after the Act’s entry into force, while the provisions on general-purpose AI models will apply six months after that (i.e., twelve months after entry into force).  Other provisions will apply later, primarily two and three years after the Act enters into force, as illustrated in the sketch below.
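
For readers mapping these deadlines onto a calendar, the staggered timeline reduces to simple date arithmetic. The minimal Python sketch below is illustrative only: the publication date is hypothetical (the actual milestones depend on when the final text appears in the OJ), and the add_months helper is our own convenience function.

    from datetime import date, timedelta

    def add_months(d, months):
        # Minimal month arithmetic; assumes the anchor day exists in the
        # target month, which holds for the example date used here.
        years, month_index = divmod(d.month - 1 + months, 12)
        return d.replace(year=d.year + years, month=month_index + 1)

    # Hypothetical publication date in the Official Journal -- illustration only.
    publication = date(2024, 7, 1)

    # The Act enters into force 20 days after publication in the OJ.
    entry_into_force = publication + timedelta(days=20)

    milestones = {
        "Entry into force": entry_into_force,
        "Prohibited AI practices apply (+6 months)": add_months(entry_into_force, 6),
        "General-purpose AI provisions apply (+12 months)": add_months(entry_into_force, 12),
        "Most remaining provisions apply (+24 months)": add_months(entry_into_force, 24),
        "Longest transition periods end (+36 months)": add_months(entry_into_force, 36),
    }

    for label, when in milestones.items():
        print(f"{label}: {when.isoformat()}")

Substituting the actual OJ publication date, once known, shifts every milestone accordingly.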

The adoption of the AI Act represents a key moment in the global discourse on how best to regulate AI technologies. It coincides with efforts in other jurisdictions to support the development of safe AI, including the Biden Administration’s Artificial Intelligence Executive Order and a battery of regulatory initiatives in China.

The formal adoption of the Act is not the end of the regulatory process. Member States will need to appoint national competent authorities to oversee its implementation in their jurisdictions, while the Commission must issue guidelines to help regulated actors interpret and apply a large number of provisions. Stay tuned for further updates as the AI Act progresses through its final stages.

*                      *                      *

The Covington team continues to monitor developments on the AI Act, and we regularly advise the world’s top technology companies on their most challenging regulatory and compliance issues in the EU and other major markets. If you have questions about the AI Act or other tech regulatory matters, we would be happy to assist.

On 20 February 2024, the Governments of the UK and Australia co-signed the UK-Australia Online Safety and Security Memorandum of Understanding (“MoU”). The MoU is intended to serve as a framework for the two countries to jointly deliver concrete, coordinated online safety and security policy initiatives and outcomes that support their citizens, businesses and economies.

The MoU comes shortly after the UK Information Commissioner’s Office (“ICO”) introduced its guidance on content moderation and data protection (see our previous blog here) to complement the UK’s Online Safety Act 2023, and the commencement of the Australian online safety codes, which complement the Australian Online Safety Act 2021.

The scope of the MoU is broad, covering a range of policy areas, including: harmful online behaviour; age assurance; safety by design; online platforms; child safety; technology-facilitated gender-based violence; safety technology; online media and digital literacy; user privacy and freedom of expression; online child sexual exploitation and abuse; terrorist and violent extremist content; lawful access to data; encryption; misinformation and disinformation; and the impact of new, emerging and rapidly evolving technologies such as artificial intelligence (“AI”).

Continue Reading UK and Australia Agree Enhanced Cross-Border Cooperation in Online Safety and Security

On February 28, the European Data Protection Board (“EDPB”) announced that EU supervisory authorities (“SAs”) will undertake a coordinated enforcement action in 2024 regarding data subjects’ right of access under the GDPR.  For context, the EDPB selects a particular topic each year to serve as the focus for pan-EU coordinated enforcement.

In 2023, regulators focused on the designation and role of data protection officers.  On January 17, 2024, the EDPB published a report providing an overview of the actions SAs took in the context of the 2023 action.  This blog post provides an overview of what you can expect from the 2024 coordinated enforcement action, based on the lessons learned from 2023.

If this year’s coordinated enforcement action is similar to last year’s, this is what to expect:

  • SAs will likely send questionnaires to organizations in specific industry sectors or to controllers with similar data processing activities, and the number of stakeholders assessed may vary.  Organizations can expect to receive questionnaires from SAs using mechanisms comparable to those used in 2023.  SAs, for instance, may send questionnaires requesting information from a specific group of organizations representing a particular sector (e.g., the banking sector or public entities) in order to better track their responses, or to a broader group of companies that share similar data processing activities in order to get a general view of their compliance status and the challenges they face in addressing data subjects’ access requests (“DSARs”).
  • The questionnaires are likely to contain a wide range of questions about how companies comply with the right of access.  Organizations can expect to receive a questionnaire that uses a pre-agreed structure, although some SAs may decide to slightly modify the questionnaire or add certain questions.  The questionnaire is likely to have limited space for responses and/or no open-field questions.  We expect the questions to be broad enough to capture information on, for example: (i) how organizations inform data subjects about their right of access; (ii) how easy it is for data subjects to exercise their right of access; (iii) what processes organizations have for responding to DSARs; (iv) how organizations ensure they meet the GDPR’s deadline for responding to DSARs; (v) how organizations train employees who handle DSARs; (vi) the involvement of DPOs in responding to DSARs; and (vii) situations in which organizations have rejected DSARs.  The questionnaire may also ask whether controllers have been asked by data subjects to identify all parties to which controllers have disclosed data (i.e., data recipients) and how controllers have responded to such requests.  That is because in 2023 the CJEU ruled that when data subjects specifically request information about data recipients in a DSAR, controllers must identify all data recipients by name, except where doing so is impossible or the request is manifestly unfounded or excessive (see our blog post about case C-154/21).
  • Responding to the questionnaire may be on a voluntary or mandatory basis, and may (or may not) lead to enforcement.  For some SAs, the coordinated enforcement action may simply be used to understand stakeholders’ concerns to develop more useful guidelines and materials on the right of access.  Other SAs may use the responses to the questionnaire to instigate enforcement action against respondents.  For example, the French CNIL and Dutch AP have indicated that the right of access is one of their enforcement priorities for 2024.  This may indicate that they intend to take enforcement actions against companies that, based on the coordinated action, do not appear to be complying with their GDPR obligations regarding DSARs.

*           *           *

Covington’s Data Privacy and Cybersecurity team regularly advises companies on their most challenging compliance issues in the EU and other key markets, including data subject rights.  Our team is happy to assist companies with completing SAs’ questionnaires and with any other inquiries related to data privacy and cybersecurity.

(This blog post was written with the contributions of Diane Valat.)

This year’s Munich Security Conference reemphasized the need for Europe to invest in greater defense capabilities and foster a regulatory environment that is conducive to building a defense and technological industrial base. In Munich, European Commission President Ursula von der Leyen committed to appointing a European Commissioner for Defence if she is reselected later this year by the European Council and European Parliament. The EU is also due to publish shortly a new defense industrial strategy, mirroring, in part, the first-ever U.S. National Defense Industrial Strategy (NDIS) released earlier this year by the Department of Defense.

The NDIS, in turn, recognizes the need for a strong defense industry in both the U.S. and the EU, as well as other allies and partners across the globe, in order to strengthen supply chain resilience and ensure the production and delivery of critical defense supplies. And global leaders generally see the imperative of working together over the long-term to advance integrated deterrence policies and to strengthen and modernize defense industrial base ecosystems. We will continue tracking these geopolitical trends, which are likely to persist regardless of electoral outcomes in Europe or the United States.

These developments on both sides of the Atlantic follow a number of significant new funding streams in Europe over the past couple of years, for instance:

  • The 2021 revision of the European Defense Fund Regulation allocated €8 billion for common research and development projects, meant to be spent during the 2021-2027 multi-annual financial framework (MFF).
  • As a direct response to Ukraine’s request for assistance with the supply of 155 mm-caliber artillery rounds, the EU adopted the 2023 Act in Support of Ammunition Production (ASAP), with a €500 million fund to scale up production of ammunition and missiles.
  • Most recently, the EU adopted the 2023 European Defense Industry Reinforcement through Common Procurement Act (EDIRPA), which introduced a joint procurement fund of €300 million to facilitate Member States’ collective acquisition of defense products.
  • The European Peace Facility (EPF), an off-budget instrument with an overall financial ceiling exceeding €12 billion, is primarily directed toward the procurement of military materiel and the large-scale financing of weapon supplies to allied third countries (including €6.1 billion for Ukraine).

Continue Reading Insights from the Munich Security Conference: Towards an Expanding U.S.-EU Defense Taxonomy?

On February 20, Speaker Mike Johnson (R-LA) and Democratic Leader Hakeem Jeffries (D-NY) announced a new Artificial Intelligence (AI) task force in the House of Representatives, with the goal of developing principles and policies to promote U.S. leadership and security with respect to AI.  Rep. Jay Obernolte (R-CA) will chair the task force, joined by Rep. Ted Lieu (D-CA) as co-chair.  Several other senior members of the California delegation, including Rep. Darrell Issa (R-CA) and retiring Rep. Anna Eshoo (D-CA), will participate in the effort as well.

So far, much of the congressional activity on AI has taken place in the Senate.  Majority Leader Chuck Schumer (D-NY) convened a bipartisan working group last spring, and Senate committees have held more than 30 hearings on the topic.  Legislation is moving as a result: AI-related bills including the Transparent Automated Governance (TAG) Act (S. 1865), AI Leadership To Enable Accountable Deployment (AI LEAD) Act (S. 2293), and AI Leadership Training Act (S. 1564)—all sponsored by Sen. Gary Peters (D-MI)—have moved through committee, though no comprehensive AI legislation has yet become law this Congress.

House Task Force member Rep. Don Beyer (D-VA), who is pursuing a graduate degree in machine learning at George Mason University, recently discussed the new working group and outlined ambitious goals for it, including drafting, if not passing, as many as ten “major AI bills” in 2024.  Beyer noted that the task force will prioritize the bipartisan and bicameral Creating Resources for Every American To Experiment with Artificial Intelligence (CREATE AI) Act (S. 2714/H.R. 5077) to promote safe, innovative AI research in the United States.  He has personally sponsored or cosponsored several AI bills this Congress, including the AI Foundation Model Transparency Act (H.R. 6881), the Artificial Intelligence Environmental Impacts Act (H.R. 7197), the Federal Artificial Intelligence Risk Management Act of 2024 (H.R. 6936), and the Block Nuclear Launch by Autonomous Artificial Intelligence Act (H.R. 2894).

These bills are among the more than 30 comprehensive and targeted AI bills that members have introduced this Congress to foster transparency, protect against fake or misleading content, bolster national security, and otherwise promote AI leadership or regulate AI technology.

The creation of the House Task Force with bipartisan buy-in from leadership may signal renewed momentum on AI regulation this Congress.  Yet the prospect of comprehensive federal AI legislation passing through either chamber of Congress—much less becoming law—in a presidential election year remains uncertain, despite AI remaining a major priority for policymakers at all levels.  Executive agencies continue to implement the Biden Administration’s comprehensive executive order to promote responsible AI development, and we expect states to continue to adopt their own AI legislation, particularly as the technology advances. 

On 26 January 2024, the European Medicines Agency (EMA) announced that it has received a €10 million grant from the European Commission to support regulatory systems in Africa, and in particular the setting up of the African Medicines Agency (AMA). Although still in its early stages as an agency, AMA shows significant promise to harmonize the regulatory landscape across the continent in order to improve access to quality, safe and efficacious medical products in Africa. Other key organizations working to establish and implement the vision set out for AMA include the African Union (AU), comprising 55 member states in Africa, the African Union Development Agency (AUDA-NEPAD) and the World Health Organization (WHO). Notably, AMA is expected to play an important role in facilitating intra-regional trade for pharmaceuticals in the context of the Africa Continental Free Trade Area (AfCFTA).

Background to AMA and medicines regulation in Africa

Africa currently has limited harmonization of medicines regulation between jurisdictions. The functionality and regulatory capacity of national medicines regulatory authorities varies significantly. For example, many national regulators lack the technical expertise to independently assess innovative marketing authorization applications and instead adopt “reliance” procedures, whereby authorization by a foreign stringent regulatory authority or registration as a WHO pre-qualified product may be a condition for approval. Pharmaceutical manufacturers seeking to conduct multinational clinical trials or launch their products across Africa therefore often face challenges when navigating each country’s divergent requirements (and may face additional delays during each approval process).

Multiple initiatives in the last decade have aimed to increase the harmonization of medicines regulation across Africa with varying degrees of success, such as:

Continue Reading EMA announces €10 million of funding to support the establishment of the African Medicines Agency

On February 9, the Third Appellate District of California vacated a trial court’s decision holding that enforcement of the California Privacy Protection Agency’s (“CPPA”) regulations could not commence until one year after the date the regulations were finalized.  As we previously explained, the Superior Court’s order prevented the CPPA from enforcing the regulations it finalized on March 29, 2023 until March 29, 2024.  However, the Appellate court held that “because there is no ‘explicit and forceful language’ mandating that the [CPPA] is prohibited from enforcing the [California Consumer Privacy Act (“CCPA”)] until (at least) one year after the [CPPA] approves final regulations, the trial court erred in concluding otherwise.”

The Appellate court acknowledged that the CPPA failed to meet its statutory deadline (i.e., July 1, 2022) for adopting final regulations and that the statute provided an enforcement date of one year after this deadline, but nonetheless concluded that the CCPA does not require a “one-year delay” between the CPPA’s approval of a final regulation and the CPPA’s authority to enforce that regulation.  The Appellate court noted that there “are other tools” to protect relevant interests, such as the CPPA’s regulation that, in deciding to pursue an investigation, it will consider “all facts it determines to be relevant, including the amount of time between the effective date of the statutory or regulatory requirement(s) and the possible or alleged violation(s) of those requirements, and good-faith efforts to comply with those requirements.”

In a statement released by the CPPA shortly after the order, the Deputy Director of Enforcement for the CPPA said that “[t]his decision should serve as an important reminder to the regulated community: now would be a good time to review your privacy practices to ensure full compliance with all of our regulations.”

The field of artificial intelligence (“AI”) is at a tipping point. Governments and industries are under increasing pressure to forecast and guide the evolution of a technology that promises to transform our economies and societies. In this series, our lawyers and advisors provide an overview of the policy approaches and regulatory frameworks for AI in jurisdictions around the world. Given the rapid pace of technological and policy developments in this area, the articles in this series should be viewed as snapshots in time, reflecting the current policy environment and priorities in each jurisdiction.

The following article examines the state of play in AI policy and regulation in China. The previous articles in this series covered the European Union and the United States.

On the sidelines of November’s APEC meetings in San Francisco, Presidents Joe Biden and Xi Jinping agreed that their nations should cooperate on the governance of artificial intelligence. Just weeks prior, President Xi unveiled China’s Global Artificial Intelligence Governance Initiative to world leaders, the nation’s bid to put its stamp on the global governance of AI. This announcement came a day after the Biden Administration revealed another round of restrictions on the export of advanced AI chips to China.

China is an AI superpower. Projections suggest that China’s AI market is on track to exceed US$14 billion this year, with ambitions to grow tenfold by 2030. Major Chinese tech companies have unveiled over twenty large language models (LLMs) to the public, and more than one hundred LLMs are fiercely competing in the market.

Understanding China’s capabilities and intentions in the realm of AI is crucial for policymakers in the U.S. and other countries to craft effective policies toward China, and for multinational companies to make informed business decisions. Irrespective of political differences, as an early mover in the realm of AI policy and regulation, China can serve as a repository of pioneering experiences for jurisdictions currently reflecting on their policy responses to this transformative technology.

This article aims to advance such understanding by outlining key features of China’s emerging approach toward AI.

Continue Reading Spotlight Series on Global AI Policy — Part III: China’s Policy Approach to Artificial Intelligence

Our December blog examined the optimism at the end of last year that a way could be found out of the political deadlock that has paralysed the Northern Ireland Assembly for the last two years. As our blog noted, although those hopes did not materialize, the fact that the discussions had reached such an advanced stage suggested that a solution might be found in the New Year.  The announcement by the Democratic Unionist Party (DUP) on 30 January that a Deal had been reached seems to justify that optimism.

A Historical Recap

The 1998 Belfast Good Friday Agreement (GFA) brought an end to 30 years of ‘The Troubles’.  It struck a delicate balance between the competing interests of the Unionist and Nationalist communities in Northern Ireland.  Key to its success was the removal of border infrastructure between Northern Ireland and the Republic of Ireland, and the creation of a Power Sharing Executive (PSE) for Northern Ireland.  The PSE allocates the position of First Minister to the largest political party in Northern Ireland, and the position of Deputy First Minister to the second largest.  Other than the status implied by the titles, there is very little practical difference between the two roles.

Northern Ireland’s parliament, the Stormont Assembly, sat for an extended period only between 2007 and 2017, but, until the last set of elections, the DUP had always held the position of First Minister.

Brexit and Northern Ireland

Northern Ireland voted by 56:44 to remain in the EU in the 2016 Brexit referendum.  The Unionist community largely voted ‘Leave’, believing it would consolidate Northern Ireland’s position within the UK; the Nationalist community generally voted ‘Remain’ for the opposite reason. 

Continue Reading The DUP and The Deal: Power-Sharing Returns to N Ireland

U.S. policymakers have continued to express interest in legislation to regulate artificial intelligence (“AI”), particularly at the state level.  Although comprehensive AI bills and frameworks in Congress have received substantial attention, state legislatures also have been moving forward with their own efforts to regulate AI.  This blog post summarizes key themes in state AI bills introduced in the past year.  Now that new state legislative sessions have commenced, we expect to see even more activity in the months ahead.

  • Notice Requirements:  A number of state AI bills focus on notice to individuals.  Some bills would require covered entities to notify individuals when using automated decision-making tools for decisions that affect their rights and opportunities, such as the use of AI in employment.  For example, the District of Columbia’s “Stop Discrimination by Algorithms Act” (B 114) would require a notice about how the covered entity uses personal information in algorithmic eligibility determinations, including information about the sources of that information, and it would require a separate notice to an individual affected by an algorithmic eligibility determination that results in an “adverse action.”  Similarly, the Massachusetts “Act Preventing a Dystopian Work Environment” (HB 1873) would require employers or vendors using an automated decision system to provide notice to workers prior to adopting the system and an additional notice if “significant updates or changes” are made to the system.  Additionally, other AI bills have focused on disclosure requirements between entities in the AI ecosystem.  For example, Washington’s legislature is considering a bill (HB 1951) that would require developers of automated decision tools to provide deployers with documentation of the tool’s “known limitations,” the types of data used to program or train the tool, and how the tool was evaluated for validity.
  • Impact Assessments:  Another key theme in state AI bills focuses on requirements for impact assessments in the development of AI tools; calls for these assessments aim to mitigate potential discrimination, privacy, and accuracy harms.  For example, a Vermont bill (HB 114) would require employers using automated decision-making tools to conduct algorithmic impact assessments prior to using those tools for employment-related decisions.  Additionally, the bill mentioned above under consideration in the Washington legislature (HB 1951) would require that deployers complete impact assessments for automated decision tools that include, for example, assessments of reasonably foreseeable risks of algorithmic decision making and the safeguards implemented.
  • Individual Rights:  State legislatures also have sought to include in AI bills requirements enabling consumers to exercise certain rights.  For example, several state AI bills would establish an individual right to opt out of decisions based on automated decision-making or to request a human reevaluation of such decisions.  California (AB 331) and New York (AB 7859) are considering bills that would require AI deployers to allow individuals to request “alternative selection processes” where an automated decision tool is being used to make, or is a controlling factor in, a consequential decision.  Similarly, New York’s AI Bill of Rights (S 8209) would provide individuals with the right to opt out of the use of automated systems in favor of a human alternative.
  • Licensing & Registration Regimes:  A handful of state legislatures have proposed requirements for AI licensing and registration.  For example, New York’s Advanced AI Licensing Act (A 8195) would require all developers and operators of certain “high-risk advanced AI systems” to apply for a license from the state before use.  Other bills require registration for certain uses of the AI system.  For instance, an amendment introduced in the Illinois legislature (HB 1002) would require state certification of diagnostic algorithms used by hospitals.
  • Generative AI & Content Labeling:  Another prominent theme in state AI legislation has been a focus on labeling content produced by generative AI systems.  For example, Rhode Island is considering a bill (H 6286) that would require a “distinctive watermark” to authenticate generative AI content.

We will continue to monitor these and related developments across our blogs.