On February 20, Speaker Mike Johnson (R-LA) and Democratic Leader Hakeem Jeffries (D-NY) announced a new Artificial Intelligence (AI) task force in the House of Representatives, with the goal of developing principles and policies to promote U.S. leadership and security with respect to AI.  Rep. Jay Obernolte (R-CA) will chair the task force, joined by Rep. Ted Lieu (D-CA) as co-chair.  Several other senior members of the California delegation, including Rep. Darrell Issa (R-CA) and retiring Rep. Anna Eshoo (D-CA), will participate in the effort as well.

So far, much of the congressional activity on AI has taken place in the Senate.  Majority Leader Chuck Schumer (D-NY) convened a bipartisan working group last spring, and Senate committees have held more than 30 hearings on the topic.  Legislation is moving as a result: AI-related bills including the Transparent Automated Governance (TAG) Act (S. 1865), AI Leadership To Enable Accountable Deployment (AI LEAD) Act (S. 2293), and AI Leadership Training Act (S. 1564)—all sponsored by Sen. Gary Peters (D-MI)—have moved through committee, though no comprehensive AI legislation has yet become law this Congress.

House Task Force member Rep. Don Beyer (D-VA), who is pursuing a graduate degree in machine learning at George Mason University, recently discussed the new working group.  He outlined ambitious goals for the group, including drafting, if not passing, as many as ten “major AI bills” in 2024.  Beyer noted that the task force will prioritize the bipartisan and bicameral Creating Resources for Every American To Experiment with Artificial Intelligence (CREATE AI) Act (S. 2714/H.R. 5077) to promote safe, innovative AI research in the United States.  He has personally sponsored or cosponsored several AI bills this Congress, including the AI Foundation Model Transparency Act (H.R. 6881), the Artificial Intelligence Environmental Impacts Act (H.R. 7197), the Federal Artificial Intelligence Risk Management Act of 2024 (H.R. 6936), and the Block Nuclear Launch by Autonomous Artificial Intelligence Act (H.R. 2894).

These bills are among the more than 30 comprehensive and targeted AI bills that members have introduced this Congress to foster transparency, protect against fake or misleading content, bolster national security, and otherwise promote AI leadership or regulate AI technology.

The creation of the House Task Force with bipartisan buy-in from leadership may signal renewed momentum on AI regulation this Congress.  Yet the prospect of comprehensive federal AI legislation passing through either chamber of Congress—much less becoming law—in a presidential election year remains uncertain, despite AI remaining a major priority for policymakers at all levels.  Executive agencies continue to implement the Biden Administration’s comprehensive executive order to promote responsible AI development, and we expect states to continue to adopt their own AI legislation, particularly as the technology advances. 

On 26 January 2024, the European Medicines Agency (EMA) announced that it had received a €10 million grant from the European Commission to support regulatory systems in Africa, and in particular the establishment of the African Medicines Agency (AMA). Although still in its early stages as an agency, AMA shows significant promise to harmonize the regulatory landscape across the continent and thereby improve access to quality, safe, and efficacious medical products in Africa. Other key organizations working to establish and implement the vision set out for AMA include the African Union (AU), comprising 55 member states in Africa, the African Union Development Agency (AUDA-NEPAD), and the World Health Organization (WHO). Notably, AMA is expected to play an important role in facilitating intra-regional trade in pharmaceuticals in the context of the African Continental Free Trade Area (AfCFTA).

Background to AMA and medicines regulation in Africa

Africa currently has limited harmonization of medicines regulation between jurisdictions. The functionality and regulatory capacity of national medicines regulatory authorities vary significantly. For example, many national regulators lack the technical expertise to independently assess innovative marketing authorization applications and instead adopt “reliance” procedures, whereby authorization by a stringent foreign regulatory authority or registration as a WHO pre-qualified product may be a condition for approval. Pharmaceutical manufacturers seeking to conduct multinational clinical trials or launch their products across Africa therefore often face challenges navigating each country’s divergent requirements (and can face additional delays during each approval process).

Multiple initiatives in the last decade have aimed to increase the harmonization of medicines regulation across Africa with varying degrees of success, such as:

Continue Reading EMA announces €10 million of funding to support the establishment of the African Medicines Agency

On February 9, the Third Appellate District of California vacated a trial court decision holding that enforcement of the California Privacy Protection Agency’s (“CPPA”) regulations could not commence until one year after the date the regulations were finalized.  As we previously explained, the Superior Court’s order prevented the CPPA from enforcing the regulations it finalized on March 29, 2023 until March 29, 2024.  However, the appellate court held that “because there is no ‘explicit and forceful language’ mandating that the [CPPA] is prohibited from enforcing the [California Consumer Privacy Act (“CCPA”)] until (at least) one year after the [CPPA] approves final regulations, the trial court erred in concluding otherwise.”

The appellate court acknowledged that the CPPA failed to meet its statutory deadline (i.e., July 1, 2022) for adopting final regulations and that the statute provided an enforcement date of one year after this deadline, but nonetheless concluded that the CCPA does not require a “one-year delay” between the CPPA’s approval of a final regulation and the CPPA’s authority to enforce that regulation.  The appellate court noted that there “are other tools” to protect relevant interests, such as the CPPA’s regulation that, in deciding to pursue an investigation, it will consider “all facts it determines to be relevant, including the amount of time between the effective date of the statutory or regulatory requirement(s) and the possible or alleged violation(s) of those requirements, and good-faith efforts to comply with those requirements.”

In a statement released by the CPPA shortly after the order, the Deputy Director of Enforcement for the CPPA said that “[t]his decision should serve as an important reminder to the regulated community: now would be a good time to review your privacy practices to ensure full compliance with all of our regulations.”

The field of artificial intelligence (“AI”) is at a tipping point. Governments and industries are under increasing pressure to forecast and guide the evolution of a technology that promises to transform our economies and societies. In this series, our lawyers and advisors provide an overview of the policy approaches and regulatory frameworks for AI in jurisdictions around the world. Given the rapid pace of technological and policy developments in this area, the articles in this series should be viewed as snapshots in time, reflecting the current policy environment and priorities in each jurisdiction.

The following article examines the state of play in AI policy and regulation in China. The previous articles in this series covered the European Union and the United States.

On the sidelines of November’s APEC meetings in San Francisco, Presidents Joe Biden and Xi Jinping agreed that their nations should cooperate on the governance of artificial intelligence. Just weeks prior, President Xi unveiled China’s Global Artificial Intelligence Governance Initiative to world leaders, the nation’s bid to put its stamp on the global governance of AI. This announcement came a day after the Biden Administration revealed another round of restrictions on the export of advanced AI chips to China.

China is an AI superpower. Projections suggest that China’s AI market is on track to exceed US$14 billion this year, and the country has ambitions to grow that market tenfold by 2030. Major Chinese tech companies have unveiled more than twenty large language models (LLMs) to the public, and more than one hundred LLMs are competing fiercely in the market.

Understanding China’s capabilities and intentions in the realm of AI is crucial for policymakers in the U.S. and other countries to craft effective policies toward China, and for multinational companies to make informed business decisions. Irrespective of political differences, as an early mover in AI policy and regulation, China can serve as a source of pioneering experience for jurisdictions currently weighing their policy responses to this transformative technology.

This article aims to advance such understanding by outlining key features of China’s emerging approach toward AI.

Continue Reading Spotlight Series on Global AI Policy — Part III: China’s Policy Approach to Artificial Intelligence

Our December blog examined the optimism at the end of last year that a way could be found out of the political deadlock that has paralysed the Northern Ireland Assembly for the last two years. As our blog noted, although those hopes did not materialise, the fact that the discussions had reached such an advanced stage suggested that a solution might be found in the New Year.  The announcement by the Democratic Unionist Party (DUP) on 30 January that a deal had been reached seems to justify that optimism.

A Historical Recap

The 1998 Belfast Good Friday Agreement (GFA) brought an end to 30 years of ‘The Troubles’.  It struck a delicate balance between the competing interests of the Unionist and Nationalist communities in Northern Ireland.  Key to its success was the removal of border infrastructure between Northern Ireland and the Republic of Ireland, and the creation of a Power Sharing Executive (PSE) for Northern Ireland.  The PSE allocates the position of First Minister to the largest political party in Northern Ireland, and the position of Deputy First Minister to the second largest.  Other than the status implied by the titles, there is very little practical difference between the two roles.

Northern Ireland’s parliament, the Stormont Assembly, only sat for any extended period between 2007 and 2017, but, until the last set of elections, the DUP had always held the position of First Minister.

Brexit and Northern Ireland

Northern Ireland voted by 56:44 to remain in the EU in the 2016 Brexit referendum.  The Unionist community largely voted ‘Leave’, believing it would consolidate Northern Ireland’s position within the UK; the Nationalist community generally voted ‘Remain’ for the opposite reason. 

Continue Reading The DUP and The Deal: Power-Sharing Returns to N Ireland

U.S. policymakers have continued to express interest in legislation to regulate artificial intelligence (“AI”), particularly at the state level.  Although comprehensive AI bills and frameworks in Congress have received substantial attention, state legislatures also have been moving forward with their own efforts to regulate AI.  This blog post summarizes key themes in state AI bills introduced in the past year.  Now that new state legislative sessions have commenced, we expect to see even more activity in the months ahead.

  • Notice Requirements:  A number of state AI bills focus on notice to individuals.  Some bills would require covered entities to notify individuals when using automated decision-making tools for decisions that affect their rights and opportunities, such as the use of AI in employment.  For example, the District of Columbia’s “Stop Discrimination by Algorithms Act” (B 114) would require a notice about how the covered entity uses personal information in algorithmic eligibility determinations, including information about the sources of that information, and it would require a separate notice to an individual affected by an algorithmic eligibility determination that results in an “adverse action.”  Similarly, the Massachusetts “Act Preventing a Dystopian Work Environment” (HB 1873) would require employers or vendors using an automated decision system to provide notice to workers prior to adopting the system, and an additional notice if “significant updates or changes” are made to the system.  Other AI bills have focused on disclosure requirements between entities in the AI ecosystem.  For example, Washington’s legislature is considering a bill (HB 1951) that would require developers of automated decision tools to provide deployers with documentation of the “known limitations” of the tool, the types of data used to program or train the tool, and how the tool was evaluated for validity.
  • Impact Assessments:  Another key theme in state AI bills is the requirement to conduct impact assessments in the development of AI tools; these assessments aim to mitigate potential discrimination, privacy, and accuracy harms.  For example, a Vermont bill (HB 114) would require employers using automated decision-making tools to conduct algorithmic impact assessments prior to using those tools for employment-related decisions.  Additionally, the Washington bill mentioned above (HB 1951) would require deployers to complete impact assessments for automated decision tools that address, for example, the reasonably foreseeable risks of algorithmic decision making and the safeguards implemented.
  • Individual Rights:  State legislatures also have sought to include requirements in AI bills allowing consumers to exercise certain rights.  For example, several state AI bills would establish an individual right to opt out of decisions based on automated decision-making or to request a human reevaluation of such decisions.  California (AB 331) and New York (AB 7859) are considering bills that would require AI deployers to allow individuals to request “alternative selection processes” where an automated decision tool is being used to make, or is a controlling factor in, a consequential decision.  Similarly, New York’s AI Bill of Rights (S 8209) would provide individuals with the right to opt out of the use of automated systems in favor of a human alternative.
  • Licensing & Registration Regimes:  A handful of state legislatures have proposed requirements for AI licensing and registration.  For example, New York’s Advanced AI Licensing Act (A 8195) would require all developers and operators of certain “high-risk advanced AI systems” to apply for a license from the state before use.  Other bills require registration for certain uses of the AI system.  For instance, an amendment introduced in the Illinois legislature (HB 1002) would require state certification of diagnostic algorithms used by hospitals.
  • Generative AI & Content Labeling:  Another prominent theme in state AI legislation has been a focus on labeling content produced by generative AI systems.  For example, Rhode Island is considering a bill (H 6286) that would require a “distinctive watermark” to authenticate generative AI content.

We will continue to monitor these and related developments across our blogs.

From February 17, 2024, the Digital Services Act (“DSA”) will apply to providers of intermediary services (e.g., cloud services, file-sharing services, search engines, social networks, and online marketplaces). These entities will be required to comply with a number of obligations, including implementing notice-and-action mechanisms, complying with detailed rules on terms and conditions, and publishing transparency reports on content moderation practices. For more information on the DSA, see our previous blog posts here and here.

Under the DSA, the European Commission is empowered to adopt delegated and implementing acts on certain aspects of the implementation and enforcement of the DSA. In 2023, the Commission adopted one delegated act on supervisory fees to be paid by very large online platforms and very large online search engines (“VLOPs” and “VLOSEs,” respectively), and one implementing act on procedural matters relating to the Commission’s enforcement powers. The Commission has proposed several other delegated and implementing acts, which we set out below. The consultation periods for these draft acts have now passed, and we anticipate that they will be adopted in the coming months.

Pending Delegated Acts

  • Draft Delegated Act on Conducting Independent Audits. This draft delegated act defines the steps that designated VLOPs and VLOSEs will need to follow to verify the independence of their auditors, in particular setting out rules on the procedures, methodology, and templates to be used. According to the draft delegated act, designated VLOPs and VLOSEs should be subject to their first audit no later than 16 months after their designation. The consultation period for this draft delegated act ended on June 2, 2023.
  • Draft Delegated Act on Data Access for Research. This draft delegated act specifies the conditions under which vetted researchers may access data from VLOPs and VLOSEs. The consultation period for this draft delegated act ended on May 31, 2023.
Continue Reading Draft Delegated and Implementing Acts Pursuant to the Digital Services Act

New Jersey and New Hampshire are the latest states to pass comprehensive privacy legislation, joining California, Virginia, Colorado, Connecticut, Utah, Iowa, Indiana, Tennessee, Montana, Oregon, Texas, Florida, and Delaware.  Below is a summary of key takeaways. 

New Jersey

On January 8, 2024, the New Jersey state senate passed S.B. 332 (“the Act”), which was signed into law on January 16, 2024.  The Act, which takes effect 365 days after enactment, resembles the comprehensive privacy statutes in Connecticut, Colorado, Montana, and Oregon, though there are some notable distinctions. 

  • Scope and Applicability:  The Act will apply to controllers that conduct business or produce products or services in New Jersey, and, during a calendar year, control or process either (1) the personal data of at least 100,000 consumers, excluding personal data processed for the sole purpose of completing a transaction; or (2) the personal data of at least 25,000 consumers where the business derives revenue, or receives a discount on the price of any goods or services, from the sale of personal data. The Act omits several exemptions present in other state comprehensive privacy laws, including exemptions for nonprofit organizations and information covered by the Family Educational Rights and Privacy Act.
  • Consumer Rights:  Consumers will have the rights of access, deletion, portability, and correction under the Act.  Moreover, the Act will provide consumers with the right to opt out of targeted advertising, the sale of personal data, and profiling in furtherance of decisions that produce legal or similarly significant effects concerning the consumer.  Within six months of the Act’s effective date, controllers must allow consumers to exercise these opt-out rights through a universal opt-out mechanism.
  • Sensitive Data:  The Act will require consent prior to the collection of sensitive data. “Sensitive data” is defined to include, among other things, racial or ethnic origin, religious beliefs, mental or physical health condition, sex life or sexual orientation, citizenship or immigration status, status as transgender or non-binary, and genetic or biometric data.  Notably, the Act is the first comprehensive privacy statute other than the California Consumer Privacy Act to include financial information in its definition of sensitive data.  The Act defines financial information as an “account number, account log-in, financial account, or credit or debit card number, in combination with any required security code, access code, or password that would permit access to a consumer’s financial account.”
  • Opt-In Consent for Certain Processing of Personal Data Concerning Teens:  Unless a controller obtains a consumer’s consent, the Act will prohibit the controller from processing personal data for targeted advertising, sale, or profiling where the controller has actual knowledge, or willfully disregards, that the consumer is between the ages of 13 and 16 years old.
  • Enforcement and Rulemaking:  The Act grants the New Jersey Attorney General enforcement authority.  The Act also provides controllers with a 30-day right to cure for certain violations, which will sunset eighteen months after the Act’s effective date.  Like the comprehensive privacy laws in California and Colorado, the Act authorizes rulemaking under the state Administrative Procedure Act.  Specifically, the Act requires the Director of the Division of Consumer Affairs in the Department of Law and Public Safety to promulgate rules and regulations pursuant to the Administrative Procedure Act that are necessary to effectuate the Act’s provisions.  
Continue Reading New Jersey and New Hampshire Pass Comprehensive Privacy Legislation

On January 16, the attorneys general of 25 states – including California, Illinois, and Washington – and the District of Columbia filed reply comments to the Federal Communications Commission’s (FCC) November Notice of Inquiry on the implications of artificial intelligence (AI) technology for efforts to mitigate robocalls and robotexts. 

The Telephone Consumer Protection Act (TCPA) limits the conditions under which a person may lawfully make a telephone call using “an artificial or prerecorded voice.”  The reply comments call on the FCC to take the position that “any type of AI technology that generates a human voice should be considered an ‘artificial voice’ for purposes of the [TCPA].”  They further state that a more permissive approach would “act as a ‘stamp of approval’ for unscrupulous businesses seeking to employ AI technologies to inundate consumers with unwanted robocalls for which they did not provide consent[], all based on the argument that the business’s advanced AI technology acts as a functional equivalent of a live agent.”

On 24 January 2024, the European Commission (the “Commission”) published its European Economic Security Package (the “EESP”), which included the long-awaited proposal to reform the EU Regulation establishing a framework for Foreign Direct Investment screening (the “EU FDI Regulation”). The proposed regulation (the “Proposed Regulation”) is one of the EESP’s five initiatives to implement the European Economic Security Strategy (published in June 2023) – for an overview of the EESP, see our Global Policy Watch blog.

The Proposed Regulation seeks to improve the legal framework for foreign investment screening in the European Union and builds upon feedback that the Commission received during its public consultation in 2023. If adopted as proposed, it will significantly change the landscape of foreign investment screening regimes across the EU (for a full report of the public consultation see here).

This blog highlights the key changes under the proposed reform and analyses their impact on global deal making. We also provide an outlook on the next steps for the proposals.

Key takeaways and comment

  • Extended scope to include indirect foreign investments through EU subsidiaries and greenfield investments.
  • Minimum standards and greater harmonisation across the EU.
  • Introduction of call-in powers to review all transactions for at least 15 months after completion.
  • Coordinated submission of foreign investment filings in multi-country transactions.
  • Focused cooperation between Member States and the Commission on cases more likely to be sensitive.
  • More prescriptive guidance on substantive assessments and remedies, including a formal obligation for national screening authorities to prohibit or impose conditions on transactions they conclude are likely to negatively affect security or public order in one or more Member States.
  • Increased reporting, while protecting confidential information.
Continue Reading Draft EU Screening Regulation – a new chapter for screening foreign direct investments in the EU