On May 16, the U.S. Securities and Exchange Commission (“SEC”) adopted amendments to Regulation S-P, which implements the Gramm-Leach-Bliley Act (“GLBA”) for SEC-regulated entities such as broker-dealers, investment companies, registered investment advisers, and transfer agents.

Among other requirements, the amendments require SEC-regulated entities to adopt written policies and procedures for an incident response program that is “reasonably designed to detect, respond to, and recover from unauthorized access to or use of customer information.”  Under the required incident response program, SEC-regulated entities must provide timely notification to individuals whose sensitive customer information was, or is reasonably likely to have been, accessed or used without authorization.  Other provisions address recordkeeping, annual privacy notices, and oversight of service providers, and expand the scope of financial institutions and “customer information” covered by the rule.

The SEC had previously issued a proposed rule for comment in the Federal Register in April 2023.  Industry representatives raised a number of concerns with the rule, including conflicts between the proposed rule and state data breach laws and a lack of consistency with the safeguarding standards promulgated by other federal prudential regulators.  Despite these concerns, the final rule is substantially as proposed and reflects only minor revisions.  For example, the following changes have been made to the notification provisions of the final rule:

  • Clarification that the requirement does not apply in cases where an SEC-regulated entity reasonably determines that a specific individual’s sensitive customer information was not accessed or used without authorization.
  • Broadening the scope and timing requirements of the so-called “law enforcement exception” to allow delays in providing notifications where the Attorney General determines that notice would pose a substantial risk to public safety, in addition to national security.
  • No longer requiring that notifications include “what has been done to protect the sensitive customer information from further unauthorized access or use” given the risk that this information could advantage threat actors.

The final rule will become effective 60 days after publication in the Federal Register.

As the energy transition gathers pace, the need for access to the essential raw materials that underpin it is also accelerating:

  • An electric car needs six times more rare earth minerals than a conventional vehicle;
  • An onshore wind plant needs nine times more materials than a comparable gas facility;
  • Between 2017 and 2022, the energy sector drove a tripling of global demand for lithium, whilst demand for cobalt and nickel rose by 70% and 40%[1], respectively;
  • Between 3 and 6.5 billion tonnes of transitional minerals will be needed over the next three decades if the world is to meet its climate goals[2].

The current and future global demand for transitional metals and minerals offers a potentially huge economic opportunity[3]. This is particularly the case for Africa, where more than 50% of the world’s cobalt and manganese, 92% of its platinum, and significant quantities of lithium and copper are to be found. Almost all of the continent’s output is currently shipped as ore for processing in third countries, meaning the potential economic benefit of this enormous mineral wealth has not filtered through to the real economies of its African source countries[4].  Africa exports roughly 75% of its crude oil, which is refined elsewhere and re-imported as (more expensive) petroleum products; and exports 45% of its natural gas, whilst 600 million Africans have no access to electricity (approximately 53% of the continent’s population)[5].

A number of African governments have expressed their determination to avoid repeating the ‘resource curse’ mistakes of the past, by using the continent’s natural resources to drive domestic economic growth, while creating meaningful domestic job opportunities, rather than exporting them and the consequent economic growth elsewhere.  This approach has led a number of African countries to impose export restrictions on raw minerals; promote domestic processing; and demand that agreements with third countries promote technology transfers and improve domestic processing capacities and workforce skills.

Sustainable use of transition minerals

A resolution calling for the sustainable use of transitional minerals and promoting equitable benefit-sharing from extraction was recently presented at the UN Environment Assembly in Nairobi[6].  The Resolution, which was supported by a group of mainly African countries including the DRC, Senegal, Burkina Faso, Cameroon and Chad, was described as being “crucial for African countries, the environment and the future of [African nations’] populations.”

A number of African countries have already taken steps to protect their natural resources and move up the processing value chain[7].

Continue Reading African Raw Material Export Bans: Protectionism or Self-Determination?

Nearly a year after Senate Majority Leader Chuck Schumer (D-NY) launched the SAFE Innovation Framework for artificial intelligence (AI) with Senators Mike Rounds (R-SD), Martin Heinrich (D-NM), and Todd Young (R-IN), the bipartisan group has released a 31-page “Roadmap” for AI policy.  The overarching theme of the Roadmap is “harnessing the full potential of AI while minimizing the risks of AI in the near and long term.”

In contrast to Europe’s approach to regulating AI, the Roadmap does not propose or even contemplate a comprehensive AI law.  Rather, it identifies key themes and areas of agreement and directs the relevant congressional committees of jurisdiction to legislate on key issues.  The Roadmap recommendations are informed by the nine AI Insight Forums that the bipartisan group convened over the last year.

  • Supporting U.S. Innovation in AI.  The Roadmap recommends at least $32 billion in funding per year for non-defense AI innovation, and the authors call on the Appropriations Committee to “develop emergency appropriations language to fill the gap between current spending levels and the [National Security Commission on AI (NSCAI)]-recommended level,” suggesting the bipartisan group would like to see Congress increase funding for AI as soon as this year. The funding would cover a host of purposes, such as AI R&D, including AI chip design and manufacture; funding the outstanding CHIPS and Science Act accounts that relate to AI; and AI testing and evaluation at NIST.
    • This pillar also endorses the bipartisan Creating Resources for Every American to Experiment with Artificial Intelligence (CREATE AI) Act (S. 2714), which would broaden nonprofit and academic researchers’ access to AI development resources including computing power, datasets, testbeds, and training through a new National Artificial Intelligence Research Resource.  The Roadmap also supports elements of the Future of AI Innovation Act (S. 4178) related to “grand challenge” funding programs, which aim to accelerate AI development through prize competitions and federal investment initiatives.
    • The bipartisan group recommends including funds for the Department of Defense and DARPA to address national security threats and opportunities in the emergency funding measure.  
  • AI and the Workforce.  The Roadmap recommends committees of jurisdiction consider the impact of AI on U.S. workers and ensure that working Americans benefit from technological progress, including through training programs and by studying the impacts of AI on workers.  Importantly, the bipartisan group recommends legislation to “improve the U.S. immigration system for high-skilled STEM workers.”  The Roadmap does not address benefit programs for displaced workers.
Continue Reading Bipartisan Senate AI Roadmap Released

On 23 May 2024, the EU’s Critical Raw Materials Act (“CRMA”) entered into force.  The Regulation’s adoption within just one year after it was first proposed in March 2023 signals the EU’s political commitment to strengthen Europe’s strategic autonomy on the supply of Strategic Raw Materials (“SRMs”) and the broader category of “Critical Raw Materials” (“CRMs”).   

Here are the key takeaways for companies:

  • The CRMA sets non-binding capacity targets within the EU for the extraction, processing, refining, and recycling of SRMs that are key to achieving the green and digital transition.
  • To reach such targets, the CRMA empowers the European Commission (“the Commission”) to recognize projects that extract, process, refine or recycle SRMs, including projects outside the EU, as Strategic Projects (“SPs”) so that they may benefit from easier access to financing, expedited permitting processes, and matchmaking with off-takers.  The Commission is expected to recognize the first SPs by the end of 2024.
  • The Commission must monitor disruption risks and propose mitigation measures, if needed, to ensure a secure supply of CRMs.  To enable the Commission to do this effectively, companies may be subject to new specific obligations, such as participating in surveys, carrying out risk assessments of SRMs supply chains, mitigating possible vulnerabilities, reporting on the implementation and the financing of their SPs, labelling some products, and recycling a minimum content of permanent magnets.  
  • The Commission will also create and operate a Joint Purchasing Mechanism to aggregate the demand of interested EU off-takers consuming SRMs and seek offers from suppliers to match that aggregated demand.

Critical and Strategic Raw Materials and Capacity Targets

SRMs are indispensable raw materials for strategic sectors that facilitate transition to a greener, digital economy.  They are characterized by high forecasted demand growth and significant challenges in scaling up production in Europe to meet such demand.  Annex I to the CRMA lists 17 SRMs, including copper, gallium, lithium, manganese, and titanium metal.

Continue Reading The EU Critical Raw Materials Act enters into force

Executive Summary

  • President Luiz Inácio Lula da Silva’s administration has been making announcements and adopting actions that signal conflicting economic policy directions, and that might indicate a potential shift towards State capitalism-type policies rather than free market and free enterprise policies.
  • After Brazil’s return to democracy in 1985, the country’s first attempt at State capitalism collapsed and resulted in a two-and-a-half-year, domestic policy-generated recession that reduced the country’s GDP by 8.1 percent between 2014 and 2016.
  • Policies and actions adopted by the Lula administration have some similarities with this first attempt, in particular when it comes to government intervention in large business conglomerates.  However, President Lula faces significant political and institutional constraints.
  • Structural and microeconomic reforms also pursued by the administration offer an opportunity for businesses and investors, but State capitalism-type policies increase risks of capital misallocation, government and market inefficiencies, and corruption.


As President Luiz Inácio Lula da Silva’s administration approaches its 18-month mark, federal government announcements and actions have begun to signal a potential shift to move Brazil towards a State capitalism-type economy and reverse the free markets and free enterprise approach adopted by the past two administrations.  When seen in conjunction with the recently approved new fiscal framework and historic tax reform, these signals provide a mixed message to businesses and investors.  They point, at the same time, to more and less government intervention in markets.

Continue Reading Brazil’s State Capitalism Revisited: Mixed Signals to Businesses and Investors

In line with its previous decision-making practice (see our previous sustainability blog posts here and here), on 8 May 2024, the German Federal Cartel Office (“FCO”) declared the implementation of a new European industry standard for reusable pot plant trays compatible with competition law.

Since 2021, companies and associations from the European Green Sector had been working together to develop criteria for the design and handling of reusable plant trays and to develop a standardised industry solution – the reusable Euro Plant Tray – to reduce plastic waste. Today, more than 95% of plant trays on the market are single-use. The reusable Euro Plant Tray is available for supplies from the producer via wholesalers to garden centers, DIY stores, and the retail trade (visit Euro Plant Tray eG’s website for further details).

The FCO determined that the industry standard did not raise any competition concerns because (i) the cooperation and information exchange between the market participants is limited to what is strictly necessary for the implementation and operation of the reusable plant tray system, (ii) company-specific strategic data is collected by neutral third parties and made available to the participants only in an accumulated and aggregated manner, (iii) the participation in the Euro Plant Tray scheme is voluntary and open to any market participants from different levels in the value chain, and (iv) members of the Euro Plant Tray scheme remain free to use plant trays from other suppliers.

In its press release (only available in German), the FCO stresses that the ‘Euro Plant Tray’ sustainability initiative is another good example of how sustainability initiatives can be structured in line with the FCO’s guidance on compatibility with competition law. The case summary to be issued by the FCO in the near future likely will reveal additional details on the FCO’s assessment. Nevertheless, more general guidance and a better understanding of the relevant factors to be considered when assessing the compatibility of sustainability initiatives with competition law would be welcome. Hopefully, the currently debated 12th Amendment to the German Act against Restraints of Competition, which (inter alia) focuses on sustainability, will provide more clarity and legal certainty.

Although the final text of the EU AI Act should enter into force in the next few months, many of its obligations will only start to apply two or more years after that (for further details, see our earlier blog here). To address this gap, the Commission is encouraging industry to take early, voluntary steps to implement the Act’s requirements through an initiative it is calling the AI Pact. With the upcoming European elections on the horizon, the Commission on 6 May 2024 published additional details on the AI Pact and encouraged organizations to implement measures addressing “critical aspects of the imminent AI Act, with the aim of curbing potential misuse” and contributing “to a safe use of AI in the run-up to the election.”

What is the AI Pact?

The Commission launched the AI Pact in November 2023 with the objective of assisting organizations in planning ahead for compliance with the AI Act and encouraging early adoption of the measures outlined in the Act. Organizations involved in the AI Pact will make formal pledges to work towards compliance with the upcoming AI Act and provide specific details about the actions they are currently taking or planning to take to meet the Act’s requirements.

The AI Pact will be overseen by the Commission’s newly formed AI Office and will be structured around two pillars:

  • Pillar I: gathering and exchanging knowledge with the AI Pact network – organizations participating in the Pact contribute to the creation of a collaborative community, sharing their experiences and best practices. This will include workshops organized by the AI Office on topics including responsibilities under the AI Act and how to prepare for the Act’s implementation.
  • Pillar II: facilitating and communicating company pledges – “providers” and “deployers” of AI systems (as defined in the AI Act) will be encouraged to proactively share the concrete actions they’ve committed to take to meet the Act’s requirements and report on their progress on a regular basis. The commitments will be collected and published by the AI Office.

What does involvement in the AI Pact offer participants?

According to the Commission, the benefits for organizations participating in the AI Pact include:

  • Fostering a shared understanding of the AI Act’s goals.
  • Sharing knowledge and increasing the visibility and credibility of the safeguards put in place to demonstrate trustworthy AI.
  • Building additional trust in AI technologies.


The Covington team continues to monitor developments on the AI Act, and we regularly advise the world’s top technology companies on their most challenging regulatory and compliance issues in the EU and other major markets. If you have questions about the AI Act, or other tech regulatory matters, we are happy to assist with any queries.

In the absence of congressional action on comprehensive artificial intelligence (AI) legislation, state legislatures are forging ahead with groundbreaking bills to regulate the rapidly advancing technology.  On May 8, the Colorado House of Representatives passed SB 205, a far-reaching and comprehensive AI bill, on a 41-22-2 vote.  The final vote comes just days after the state Senate’s passage of the bill on May 3, making Colorado the first state in the nation to send comprehensive AI legislation to its governor for signing.  While Governor Jared Polis (D) has not indicated whether he will sign or veto the bill, if SB 205 becomes law, it would establish a broad regulatory regime for developers and deployers of “high-risk” AI systems. 

High-risk AI systems, as defined by the bill, are AI systems that make, or play a substantial part in making, consequential decisions that affect consumers.  SB 205’s duties and requirements would aim to minimize risks of algorithmic discrimination, or differential treatment or impacts that disfavor individuals or groups based on protected classifications, resulting from the use of high-risk AI systems.

Algorithmic Discrimination Duty of Care.  SB 205 would impose a duty of reasonable care on developers and deployers of high-risk AI to protect consumers from algorithmic discrimination.  The bill, which would be exclusively enforced by the Colorado Attorney General, would also establish a rebuttable presumption that high-risk AI developers and deployers meet this duty to use reasonable care if they comply with the bill’s requirements.

AI Interaction Notices & Public Disclosures.  SB 205 would require entities that deploy, sell, or otherwise make available an AI system that is “intended to interact with consumers” to disclose to consumers that they are interacting with an AI system, unless obvious to a reasonable person.  The bill would also require all AI developers and deployers to issue public statements disclosing the types of high-risk AI systems they develop, modify, or deploy and how they manage algorithmic discrimination risks, with updates within 90 days after modifying any high-risk AI. 

High-Risk AI Developer Requirements.  High-risk AI developers would be required to disclose to deployers information related to harmful or inappropriate uses, training data and data governance measures, performance evaluations, algorithmic discrimination safeguards, and other aspects of high-risk AI systems, along with any other information required to conduct impact assessments or monitor a high-risk AI system’s performance for risks of algorithmic discrimination.  High-risk AI developers would also be required to disclose, to the Colorado Attorney General and all known deployers and developers of a high-risk AI system, any known or foreseeable risk of algorithmic discrimination arising from the high-risk AI system’s intended uses within 90 days after discovering that such algorithmic discrimination occurred.

High-Risk AI Deployer Requirements.  SB 205 would require high-risk AI deployers to implement a “risk management policy and program” for mitigating algorithmic discrimination, which must be regularly updated over a high-risk AI system’s life cycle and must be reasonable considering the National Institute of Standards and Technology (NIST)’s AI Risk Management Framework or equivalent risk management frameworks.  High-risk AI deployers would also be required to conduct algorithmic discrimination impact assessments for each high-risk AI system in deployment and at least 90 days after such AI systems are substantially modified. 

Additionally, high-risk AI deployers would be required to notify consumers of the use of high-risk AI for consequential decisions that affect them, provide consumers with statements disclosing the high-risk AI system’s purposes, data, and components, and provide information regarding consumers’ rights to opt out of profiling for decisions with legal or similarly significant effects under the Colorado Privacy Act.  High-risk AI deployers would also be required to provide consumers with opportunities to (1) correct any incorrect personal data processed by the high-risk AI system and (2) appeal adverse consequential decisions arising from the use of a high-risk AI system, which must allow for human review if technically feasible.  Finally, high-risk AI deployers would also be obligated to disclose incidents of algorithmic discrimination to the Colorado Attorney General within 90 days of discovering the incident.

Comprehensive AI Bills in Perspective.  Colorado’s passage of SB 205 coincides with votes to advance comprehensive AI bills in two separate California legislative committees.  On April 23, the California Assembly Judiciary Committee voted 9-2 to pass AB 2930, a comprehensive AI bill that would regulate the use of automated decision tools.  Mirroring SB 205’s requirements for high-risk AI systems, AB 2930 would impose impact assessment, notice, and disclosure requirements on developers and deployers to mitigate algorithmic discrimination risks.  Also on April 23, the California Senate Government Organization Committee voted 11-0 to pass the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), followed by the Senate Appropriations Committee’s 7-2 vote in favor of that bill on May 6.  While Colorado’s SB 205 and California’s AB 2930 would regulate AI systems based on their use in consequential decision making and address risks of algorithmic discrimination, SB 1047 would regulate AI systems based on their technical capabilities and address risks to public safety. We are closely monitoring these and related state AI developments as they unfold.  A more detailed summary of California SB 1047 is available here, a summary of key themes in other recent state AI bills is available here, and our overview of recent state synthetic media and generative AI legislation is available here. Follow our Global Policy Watch, Inside Global Tech, and Inside Privacy blogs for ongoing updates on key AI and other technology legislative and regulatory developments.

Georgia Governor Brian Kemp has vetoed Georgia Senate Bill 368, which would have created a requirement in state law for certain “agents of foreign principals” to register and report certain lobbying and political activities in Georgia.  This is the first of a wave of recently proposed state-level baby FARA bills, designed to mirror the federal Foreign Agents Registration Act, to make it to a state governor’s desk, and also the first to be vetoed.  In the Governor’s brief veto message, he wrote that “Senate Bill 368 would prohibit foreign nationals from making political contributions, which is already prohibited by federal law, and impose additional state-level registration requirements on agents of foreign principals, some of which were unintended by the bill’s sponsor.”  He indicated that the bill’s own sponsor had requested that he veto it.

The Georgia bill, like other proposed state-level baby FARA laws, could have had broad consequences (likely broader than intended) not just for foreign companies but also for U.S. subsidiaries of foreign companies, as well as nonprofits, academic institutions, religious institutions, and others because, unlike the federal FARA statute, it did not include major exemptions intended to carve out at least some entities from the obligation to register. Covington is continuing to track the growing wave of proposed baby FARA bills, including whether the bills in other states meet the same fate as the ill-fated Georgia bill.

On May 2, 2024, the Federal Communications Commission (FCC) released a draft Notice of Proposed Rulemaking (NPRM) for consideration at the agency’s May 23 Open Meeting that proposes to “prohibit from recognition by the FCC and participation in [its] equipment authorization program, any [Telecommunications Certification Body (TCB)] or test lab in which an entity identified on the Covered List has direct or indirect ownership or control.”  The NPRM would also direct the FCC’s Office of Engineering and Technology to “suspend the recognition of any TCB or test lab directly or indirectly owned or controlled by entities identified on the Covered List, thereby preventing such entities from using their owned or controlled labs to undermine our current prohibition on Covered Equipment.”

The NPRM would seek comment on “whether and how the Commission should consider national security determinations made in other Executive Branch agency lists in establishing eligibility qualifications for FCC recognition of a TCB or a test lab in our equipment authorization program.”  It also would “propose that the prohibition would be triggered by direct or indirect ownership or control of 10% or more” and that “TCBs and test labs would be required to report any entity that holds a 5% or greater direct or indirect equity and/or voting interest.”  The NPRM would also “propose to collect additional ownership and control information from TCBs and test labs” in order to implement the proposed national security prohibition.

The proposal follows a number of other recent FCC actions undertaken to address national security concerns pertaining to communications networks and devices.  FCC Chairwoman Jessica Rosenworcel and Commissioner Brendan Carr recently announced their support for the proposal.