Technology

From January to June 2025, Poland will hold the Presidency of the Council of the European Union, presenting an ambitious agenda organized around the concept of security to tackle some of the EU’s most pressing challenges. This blog outlines the announced focus areas for technology, trade, defense, and ESG. Each of these topics is pivotal to ensuring the EU’s competitiveness, resilience, and sustainability in an increasingly complex global landscape.

Technology: Driving Innovation and Digital Transformation

The EU’s technological landscape is at a crossroads, driven by competition with the U.S. and China, and regulatory reforms such as the Digital Markets Act and the AI Act. The Polish Presidency will advance digital resilience by focusing on cybersecurity and AI governance. It commits to “promote the strengthening of European AI research, development and competence centres across the EU and support EU activities for entrepreneurs implementing disruptive technologies.” Poland also pledges to develop “a comprehensive and horizontal approach to cybersecurity” by holding “a discussion on best practices in Member States on investing in cybersecurity” and creating a “new EU cybersecurity strategy.”

The EU-U.S. Trade and Technology Council (TTC), which has facilitated transatlantic cooperation, faces uncertain prospects under evolving political landscapes. If disbanded, new bilateral arrangements like a UK-EU TTC may emerge. In technology diplomacy, the EU will likely prioritize collaborations on export control, investment screening, and dual-use technologies with allies, including the U.S.

Trade: Enhancing Competitiveness and Reducing Dependencies

The EU’s trade policy faces heightened complexities in balancing openness with economic security. Amidst Russia’s destabilizing actions and the economic decoupling from China, the Polish Presidency prioritizes reinforcing the EU’s economic sovereignty. Enhancements to the EU Customs Union and trade components of the Association Agreements with Ukraine and Moldova are expected, aligning economic cooperation with strategic resilience.

Continue Reading “Security, Europe!” Priorities of the Polish Presidency of the EU Council

As the world anticipates the return of Donald Trump to the White House, the European Union (“EU”) braces for significant impacts in various sectors. The first Trump administration’s approach to transatlantic relations was characterized by unpredictability, tariffs on imported goods, a strained NATO relationship, and withdrawal from the Iran nuclear deal and the Paris climate agreement. If past is prologue, the EU must prepare for a renewed era of uncertainty and potential adversarial policies.

Trade Relations

Trump’s self-proclaimed identity as a “tariff man” suggests that trade policies would once again be at the forefront of his administration’s priorities. His campaign promises, which include imposing global tariffs on all goods from all countries in the range of 10% to 20%, signal a departure from traditional U.S. trade policies. Such measures could have severe repercussions for the EU, both directly through increased tariffs on its exports and indirectly via an influx of dumped products from other affected nations, particularly China. Broad-based tariffs of this nature would likely provoke retaliatory measures from the EU.

The EU’s response toolkit would likely mirror many of the actions it employed between 2018 and 2020 in reaction to U.S. tariffs imposed during the first Trump administration. These measures would include retaliatory tariffs on U.S. products targeting Trump-supporting constituencies to maximize political pressure, pursuing targeted legal challenges against the U.S. at the World Trade Organization, and implementing safeguards to shield the EU market from an influx of Chinese and other diverted goods following U.S. tariff hikes. As a practical matter, the EU has suspended tariffs on U.S. exports of steel and aluminum to its market worth €2.8 billion. The suspension expires on 1 March 2025, requiring an active decision on whether to reintroduce the tariffs.

In executing these measures, the EU is expected to collaborate with allies such as the UK, Canada, Japan, Australia, and South Korea to amplify its response. The EU may also explore smaller trade agreements or informal “packages” with the U.S. as part of a negotiated tariff truce. Broader protective measures could also be pursued, focusing on subsidies and industrial policies aimed at strengthening Europe’s strategic sectors, beyond actions specific to the U.S. Some cooperation with the U.S. on China may also be possible in areas like export control, investment control, and dual-use technologies.

Continue Reading Policy Implications for Europe Under a Second Trump Administration

This quarterly update highlights key legislative, regulatory, and litigation developments in the third quarter of 2024 related to artificial intelligence (“AI”) and connected and automated vehicles (“CAVs”).  As noted below, some of these developments provide industry with the opportunity for participation and comment.

I.      Artificial Intelligence

Federal Legislative Developments

There continued to be strong bipartisan interest in passing federal legislation related to AI.  While it has been challenging to pass legislation through this Congress, there remains the possibility that one or more of the more targeted bills that have bipartisan support and Committee approval could advance during the lame duck period.

  • Senate Commerce, Science, and Transportation Committee: Lawmakers in the Senate Commerce, Science, and Transportation Committee moved forward with nearly a dozen AI-related bills, including legislation focused on developing voluntary technical guidelines for AI systems and establishing AI testing and risk assessment frameworks. 
    • In July, the Committee voted to advance the Validation and Evaluation for Trustworthy (VET) Artificial Intelligence Act (S.4769), which was introduced by Senators John Hickenlooper (D-CO) and Shelley Moore Capito (R-WV).  The Act would require the National Institute of Standards and Technology (“NIST”) to develop voluntary guidelines and specifications for internal and external assurances of AI systems, in collaboration with public and private sector organizations. 
    • In August, the Promoting United States Leadership in Standards Act of 2024 (S.3849) was placed on the Senate legislative calendar after advancing out of the Committee in July.  Introduced in February 2024 by Senators Mark Warner (D-VA) and Marsha Blackburn (R-TN), the Act would require NIST to support U.S. involvement in the development of AI technical standards through briefings, pilot programs, and other activities.  
    • In July, the Future of Artificial Intelligence Innovation Act of 2024 (S.4178)—introduced in April by Senators Maria Cantwell (D-WA), Todd Young (R-IN), John Hickenlooper (D-CO), and Marsha Blackburn (R-TN)—was ordered to be reported out of the Committee and gained three additional co-sponsors: Senators Roger F. Wicker (R-MS), Ben Ray Lujan (D-NM), and Kyrsten Sinema (I-AZ).  The Act would codify the AI Safety Institute, which would be required to develop voluntary guidelines and standards for promoting AI innovation through public-private partnerships and international alliances.
    • In July, the Artificial Intelligence Research, Innovation, and Accountability Act of 2023 (S.3312) passed out of the Committee, as amended.  Introduced in November 2023 by Senators John Thune (R-SD), Amy Klobuchar (D-MN), Roger Wicker (R-MS), John Hickenlooper (D-CO), Ben Ray Lujan (D-NM), and Shelley Moore Capito (R-WV), the Act would establish a comprehensive regulatory framework for “high-impact” AI systems, including testing and evaluation standards, risk assessment requirements, and transparency report requirements.  The Act would also require NIST to develop sector-specific recommendations for agency oversight of high-impact AI, and to research and develop means for distinguishing between content created by humans and AI systems.
  • Senate Homeland Security and Governmental Affairs Committee:  In July, the Senate Homeland Security Committee voted to advance the PREPARED for AI Act (S.4495).  Introduced in June by Senators Gary Peters (D-MI) and Thomas Tillis (R-NC), the Act would establish a risk-based framework for the procurement and use of AI by federal agencies and create a Chief AI Officers Council and agency AI Governance Board to ensure that federal agencies benefit from advancements in AI.
  • National Defense Authorization Act for Fiscal Year 2025:  In August, Senators Gary Peters (D-MI) and Mike Braun (R-IN) proposed an amendment (S.Amdt.3232) to the National Defense Authorization Act for Fiscal Year 2025 (S.4638) (“NDAA”).  The amendment would add the Transparent Automated Governance Act and the AI Leadership Training Act to the NDAA.  The Transparent Automated Governance Act would require the Office of Management and Budget (“OMB”) to issue guidance to agencies to implement transparency practices relating to the use of AI and other automated systems.  The AI Leadership Training Act would require OMB to establish a training program for federal procurement officials on the operational benefits and privacy risks of AI.  The Act would also require the Office of Personnel Management (“OPM”) to establish a training program on AI for federal management officials and supervisors.   

Continue Reading U.S. Tech Legislative, Regulatory & Litigation Update – Third Quarter 2024

With U.S. President Trump returning to the White House, we expect the regulatory landscape facing technology and communications companies to shift significantly, if not uniformly. 

On the one hand, media and telecommunications companies that have long been regulated heavily by the FCC can likely expect a more deregulatory environment than they have experienced under the Biden Administration (with potential caveats).  On the other, large technology companies, which have largely avoided heavy-handed regulation, can expect to face a more active regulatory environment aimed at limiting or preventing content moderation decisions that the incoming Administration has characterized as “censorship” of conservative viewpoints.  Meanwhile, bipartisan priorities—such as the commitment to ensuring national security in the telecommunications sector—will likely continue to be a major focus of regulatory agencies.  While the assessments of regulatory risks and opportunities will continue to be refined and updated as the next Trump administration takes shape, we highlight here a few trends that are likely to influence policy and regulation at the FCC over the next four years.

Changes in Regulation:  Deregulation for Some, Greater Scrutiny for Others

FCC Commissioner Brendan Carr, who is the frontrunner to be named the next Chair of the FCC, has a long history of public statements supporting deregulation of the industries historically regulated by the FCC.  For instance, Carr has observed in the past that “rapidly evolving market conditions counsel in favor of eliminating many of the heavy-handed FCC regulations that were adopted in an era when every technology operated in a silo.”  This likely means that we can expect to see a Republican-led FCC seeking opportunities to loosen regulations on broadcasters, the pay TV industry, and internet service providers, running the gamut from reform of broadcast licensee ownership restrictions to repealing (or supporting the court reversal of) the Biden-era net neutrality order.

However, other industries under the FCC’s umbrella may face greater scrutiny.  In particular, we anticipate that the FCC’s interest in national security policymaking will continue to grow, as Commissioner Carr has highlighted issues such as curbing the influence of foreign nations on social media platforms and expanding the FCC’s list of providers of communications equipment and services that pose an unacceptable risk to the national security of the U.S.  This interest could expand beyond traditional telecommunications providers to other technology enterprises, such as those that offer high-powered cloud computing services to customers in China and elsewhere.

Continue Reading Likely Trends in U.S. Tech and Media Regulation Under the New Trump Administration

In the past several months, two state courts in the District of Columbia and California decided motions to dismiss in cases alleging that the use of certain revenue management software violated state antitrust laws in the residential property rental management and health insurance industries.  In both industries, parallel class actions

Continue Reading State Courts Dismiss Claims Involving the Use of Revenue Management Software in Residential Rental and Health Insurance Industries

The European Court of Justice released its long-awaited judgment in the Google Shopping saga last week, finally putting to bed close to fifteen years of scrutiny into Google’s practices of favouring its own comparison shopping service (Google Shopping) over rival shopping services.

In its ruling, the ECJ upheld the General Court’s earlier judgment, which had rejected Google’s appeal over the European Commission’s decision to fine it €2.42 billion for abusing its market dominance as a search engine by systematically favouring Google Shopping in its general search results.

The overall outcome of the ECJ’s reasoning in Google Shopping is perhaps unsurprising to competition law practitioners, given the case’s unwavering direction of travel. The judgment nevertheless raises several interesting points and leaves a number of questions unanswered.

Key takeaways

  • Refusal to supply. The judgment confirmed that not every issue of access necessarily requires the application of the Bronner test of refusal to supply. The ECJ found the Bronner doctrine applies in circumstances where a dominant firm refuses to grant a competitor access to infrastructure which it has developed for its own business needs. However, the ECJ ruled that the Bronner test is not applicable in cases where there is no outright refusal of access to infrastructure – but rather access granted on discriminatory terms (such discrimination being assessed under separate forms of potential abuse).
  • Competition not on the merits. The ECJ accepted Google’s arguments that, to establish an abuse of dominance under Article 102, a two-pronged test applies: (i) that actual or potential anticompetitive effects arise from the abusive conduct; and (ii) that the conduct falls outside of “competition on the merits”. However, in assessing the latter requirement, the ECJ rejected Google’s arguments that only circumstances relating specifically to Google’s conduct are relevant to the assessment. Instead, the ECJ held that, in assessing “competition on the merits”, relevant circumstances regarding the characteristics of the market or the nature of competition are capable of characterising the conduct as falling outside of the scope of competition on the merits.
  • Causality and counterfactual. The ECJ maintained that the causal link is one of the essential elements of a competition law infringement and that, as a result, the burden of proof for such causal link (and hence the counterfactual analysis) lies with the Commission. However, the ECJ found that the counterfactual analysis is just one way to establish causality. Where establishing a credible counterfactual may be “arbitrary or even impossible” (para 231), the Commission cannot be required to systematically establish a counterfactual and can rely on other evidence to establish causality.
  • “As-efficient competitors”. The ECJ reiterated earlier case law that it is not the objective of Article 102 to ensure that less efficient competitors remain on the market, but clarified that this does not mean that a finding of abuse of dominance always requires a showing that the conduct was capable of excluding an as-efficient competitor. With respect to the AEC test, the Court held that it is just one way to establish an abuse of dominance.

Continue Reading ECJ’s Google Shopping Judgment: The End of a Long Saga

On July 18, 2024, the President of the European Commission, Ursula von der Leyen, was reconfirmed by the European Parliament for a second five-year term. As part of the process, she delivered a speech before the Parliament, complemented by a 30-page program, which outlines the Commission’s political guidelines and

Continue Reading The Future of EU Defence Policy and a Renewed Focus on Technology Security

This update focuses on how growing quantum sector investment in the UK and US is leading to the development and commercialization of quantum computing technologies with the potential to revolutionize and disrupt key sectors.  This is a fast-growing area that is seeing significant levels of public and private investment activity.  We take a look at how approaches differ in the UK and US, and discuss how a concerted, international effort is needed both to realize the full potential of quantum technologies and to mitigate new risks that may arise as the technology matures.

Quantum Computing

Quantum computing uses quantum mechanics principles to solve certain complex mathematical problems faster than classical computers.  Whilst classical computers use binary “bits” to perform calculations, quantum computers use quantum bits (“qubits”).  The value of a bit can only be zero or one, whereas a qubit can exist as zero, one, or a combination of both states (a phenomenon known as superposition), allowing quantum computers to solve certain problems exponentially faster than classical computers.
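To make the bit/qubit distinction concrete (an illustrative sketch added for this update, not part of the original text), a qubit can be modelled as a pair of complex amplitudes whose squared magnitudes give the probabilities of measuring zero or one:

```python
import math

# A classical bit holds exactly one of two values at any time.
classical_bit = 0

# A qubit is described by two complex amplitudes (alpha, beta)
# satisfying |alpha|^2 + |beta|^2 = 1.  Measurement yields 0 with
# probability |alpha|^2 and 1 with probability |beta|^2.
alpha, beta = 1 / math.sqrt(2), 1 / math.sqrt(2)  # equal superposition

p_zero = abs(alpha) ** 2
p_one = abs(beta) ** 2

print(f"P(0) = {p_zero:.2f}, P(1) = {p_one:.2f}")
```

The equal-superposition state above yields zero or one with equal probability when measured, whereas the classical bit deterministically holds a single value; exploiting many such superposed states in parallel is what underlies the speed-ups described above.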

The applications of quantum technologies are wide-ranging and quantum computing has the potential to revolutionize many sectors, including life-sciences, climate and weather modelling, financial portfolio management and artificial intelligence (“AI”).  However, advances in quantum computing may also lead to some risks, the most significant being to data protection.  Hackers could exploit the ability of quantum computing to solve complex mathematical problems at high speeds to break currently used cryptography methods and access personal and sensitive data. 

This is a rapidly developing area that governments are only just turning their attention to.  Governments are focusing not just on “quantum-readiness” and countering the emerging threats that quantum computing will present in the hands of bad actors (the US, for instance, is planning the migration of sensitive data to post-quantum encryption), but also on ramping up investment and growth in quantum technologies.

Continue Reading Quantum Computing: Developments in the UK and US

This quarterly update highlights key legislative, regulatory, and litigation developments in the second quarter of 2024 related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), and data privacy and cybersecurity. 

I. Artificial Intelligence

Federal Legislative Developments

  • Impact Assessments: The American Privacy Rights Act of 2024 (H.R. 8818, hereinafter “APRA”) was formally introduced in the House by Representative Cathy McMorris Rodgers (R-WA) on June 25, 2024.  Notably, while previous drafts of the APRA, including the May 21 revised draft, would have required algorithm impact assessments, the introduced version no longer has the “Civil Rights and Algorithms” section that contained these requirements.
  • Disclosures: In April, Representative Adam Schiff (D-CA) introduced the Generative AI Copyright Disclosure Act of 2024 (H.R. 7913).  The Act would require persons that create a training dataset that is used to build a generative AI system to provide notice to the Register of Copyrights containing a “sufficiently detailed summary” of any copyrighted works used in the training dataset and the URL for such training dataset, if the dataset is publicly available.  The Act would require the Register to issue regulations to implement the notice requirements and to maintain a publicly available online database that contains each notice filed.
  • Public Awareness and Toolkits: Certain legislative proposals focused on increasing public awareness of AI and its benefits and risks.  For example, Senator Todd Young (R-IN) introduced the Artificial Intelligence Public Awareness and Education Campaign Act (S. 4596), which would require the Secretary of Commerce, in coordination with other agencies, to carry out a public awareness campaign that provides information regarding the benefits and risks of AI in the daily lives of individuals.  Senator Edward Markey (D-MA) introduced the Social Media and AI Resiliency Toolkits in Schools Act (S. 4614), which would require the Department of Education and the federal Department of Health and Human Services to develop toolkits to inform students, educators, parents, and others on how AI and social media may impact student mental health.
  • Senate AI Working Group Releases AI Roadmap: On May 15, the Bipartisan Senate AI Working Group published a roadmap for AI policy in the United States (the “AI Roadmap”).  The AI Roadmap encourages committees to conduct further research on specific issues relating to AI, such as “AI and the Workforce” and “High Impact Uses for AI.”  It states that existing laws (concerning, e.g., consumer protection, civil rights) “need to consistently and effectively apply to AI systems and their developers, deployers, and users” and raises concerns about AI “black boxes.”  The AI Roadmap also addresses the need for best practices and the importance of having a human in the loop for certain high impact automated tasks.

Continue Reading U.S. Tech Legislative, Regulatory & Litigation Update – Second Quarter 2024

The field of artificial intelligence (“AI”) is at a tipping point. Governments and industries are under increasing pressure to forecast and guide the evolution of a technology that promises to transform our economies and societies. In this series, our lawyers and advisors provide an overview of the policy approaches and regulatory frameworks for AI in jurisdictions around the world. Given the rapid pace of technological and policy developments in this area, the articles in this series should be viewed as snapshots in time, reflecting the current policy environment and priorities in each jurisdiction.

The following article examines the state of play in AI policy and regulation in China. The previous articles in this series covered the European Union and the United States.

On the sidelines of November’s APEC meetings in San Francisco, Presidents Joe Biden and Xi Jinping agreed that their nations should cooperate on the governance of artificial intelligence. Just weeks prior, President Xi unveiled China’s Global Artificial Intelligence Governance Initiative to world leaders, the nation’s bid to put its stamp on the global governance of AI. This announcement came a day after the Biden Administration revealed another round of restrictions on the export of advanced AI chips to China.

China is an AI superpower. Projections suggest that China’s AI market is on track to exceed US$14 billion this year, with ambitions to grow tenfold by 2030. Major Chinese tech companies have unveiled over twenty large language models (LLMs) to the public, and more than one hundred LLMs are fiercely competing in the market.

Understanding China’s capabilities and intentions in the realm of AI is crucial for policymakers in the U.S. and other countries to craft effective policies toward China, and for multinational companies to make informed business decisions. Irrespective of political differences, as an early mover in the realm of AI policy and regulation, China can serve as a repository of pioneering experiences for jurisdictions currently reflecting on their policy responses to this transformative technology.

This article aims to advance such understanding by outlining key features of China’s emerging approach toward AI.

Continue Reading Spotlight Series on Global AI Policy — Part III: China’s Policy Approach to Artificial Intelligence