Technology

This quarterly update highlights key legislative, regulatory, and litigation developments in the third quarter of 2024 related to artificial intelligence (“AI”) and connected and automated vehicles (“CAVs”).  As noted below, some of these developments provide industry with the opportunity for participation and comment.

I. Artificial Intelligence

Federal Legislative Developments

There continued to be strong bipartisan interest in passing federal legislation related to AI.  While it has been challenging to pass legislation through this Congress, there remains the possibility that one or more of the more targeted bills that have bipartisan support and Committee approval could advance during the lame duck period.

  • Senate Commerce, Science, and Transportation Committee: Lawmakers in the Senate Commerce, Science, and Transportation Committee moved forward with nearly a dozen AI-related bills, including legislation focused on developing voluntary technical guidelines for AI systems and establishing AI testing and risk assessment frameworks. 
    • In July, the Committee voted to advance the Validation and Evaluation for Trustworthy (VET) Artificial Intelligence Act (S.4769), which was introduced by Senators John Hickenlooper (D-CO) and Shelley Moore Capito (R-WV).  The Act would require the National Institute of Standards and Technology (“NIST”) to develop voluntary guidelines and specifications for internal and external assurances of AI systems, in collaboration with public and private sector organizations. 
    • In August, the Promoting United States Leadership in Standards Act of 2024 (S.3849) was placed on the Senate legislative calendar after advancing out of the Committee in July.  Introduced in February 2024 by Senators Mark Warner (D-VA) and Marsha Blackburn (R-TN), the Act would require NIST to support U.S. involvement in the development of AI technical standards through briefings, pilot programs, and other activities.  
    • In July, the Future of Artificial Intelligence Innovation Act of 2024 (S.4178)—introduced in April by Senators Maria Cantwell (D-WA), Todd Young (R-IN), John Hickenlooper (D-CO), and Marsha Blackburn (R-TN)—was ordered to be reported out of the Committee and gained three additional co-sponsors: Senators Roger F. Wicker (R-MS), Ben Ray Lujan (D-NM), and Kyrsten Sinema (I-AZ).  The Act would codify the AI Safety Institute, which would be required to develop voluntary guidelines and standards for promoting AI innovation through public-private partnerships and international alliances.
    • In July, the Artificial Intelligence Research, Innovation, and Accountability Act of 2023 (S.3312) passed out of the Committee, as amended.  Introduced in November 2023 by Senators John Thune (R-SD), Amy Klobuchar (D-MN), Roger Wicker (R-MS), John Hickenlooper (D-CO), Ben Ray Lujan (D-NM), and Shelley Moore Capito (R-WV), the Act would establish a comprehensive regulatory framework for “high-impact” AI systems, including testing and evaluation standards, risk assessment requirements, and transparency report requirements.  The Act would also require NIST to develop sector-specific recommendations for agency oversight of high-impact AI, and to research and develop means for distinguishing between content created by humans and AI systems.
  • Senate Homeland Security and Governmental Affairs Committee:  In July, the Senate Homeland Security Committee voted to advance the PREPARED for AI Act (S.4495).  Introduced in June by Senators Gary Peters (D-MI) and Thom Tillis (R-NC), the Act would establish a risk-based framework for the procurement and use of AI by federal agencies and create a Chief AI Officers Council and agency AI Governance Boards to ensure that federal agencies benefit from advancements in AI.
  • National Defense Authorization Act for Fiscal Year 2025:  In August, Senators Gary Peters (D-MI) and Mike Braun (R-IN) proposed an amendment (S.Amdt.3232) to the National Defense Authorization Act for Fiscal Year 2025 (S.4638) (“NDAA”).  The amendment would add the Transparent Automated Governance Act and the AI Leadership Training Act to the NDAA.  The Transparent Automated Governance Act would require the Office of Management and Budget (“OMB”) to issue guidance to agencies to implement transparency practices relating to the use of AI and other automated systems.  The AI Leadership Training Act would require OMB to establish a training program for federal procurement officials on the operational benefits and privacy risks of AI.  The Act would also require the Office of Personnel Management (“OPM”) to establish a training program on AI for federal management officials and supervisors.   

Continue Reading U.S. Tech Legislative, Regulatory & Litigation Update – Third Quarter 2024

With U.S. President Trump returning to the White House, we expect the regulatory landscape facing technology and communications companies to shift significantly, if not uniformly. 

On the one hand, media and telecommunications companies that have long been regulated heavily by the FCC can likely expect a more deregulatory environment than they have experienced under the Biden Administration (with potential caveats).  On the other, large technology companies, which have largely avoided heavy-handed regulation, can expect to face a more active regulatory environment aimed at limiting or preventing content moderation decisions that the incoming Administration has characterized as “censorship” of conservative viewpoints.  Meanwhile, bipartisan priorities—such as the commitment to ensuring national security in the telecommunications sector—will likely continue to be a major focus of regulatory agencies.  While the assessments of regulatory risks and opportunities will continue to be refined and updated as the next Trump administration takes shape, we highlight here a few trends that are likely to influence policy and regulation at the FCC over the next four years.

Changes in Regulation:  Deregulation for Some, Greater Scrutiny for Others

FCC Commissioner Brendan Carr, who is the frontrunner to be named the next Chair of the FCC, has a long history of public statements supporting deregulation of the industries historically regulated by the FCC.  For instance, Carr has observed in the past that “rapidly evolving market conditions counsel in favor of eliminating many of the heavy-handed FCC regulations that were adopted in an era when every technology operated in a silo.”  This likely means that we can expect to see a Republican-led FCC seeking opportunities to loosen regulations on broadcasters, the pay TV industry, and internet service providers, running the gamut from reform of broadcast licensee ownership restrictions to repealing (or supporting the court reversal of) the Biden-era net neutrality order.

However, other industries under the FCC’s umbrella may face greater scrutiny.  In particular, we anticipate that the FCC’s interest in national security policymaking will continue to grow, as Commissioner Carr has highlighted issues such as curbing the influence of foreign nations on social media platforms and expanding the FCC’s list of providers of communications equipment and services that pose an unacceptable risk to the national security of the U.S.  This interest could expand beyond traditional telecommunications providers to other technology enterprises, such as those that offer high-powered cloud computing services to customers in China and elsewhere. Continue Reading Likely Trends in U.S. Tech and Media Regulation Under the New Trump Administration

In the past several months, two state courts in the District of Columbia and California decided motions to dismiss in cases alleging that the use of certain revenue management software violated state antitrust laws in the residential property rental management and health insurance industries.  In both industries, parallel class actions

Continue Reading State Courts Dismiss Claims Involving the Use of Revenue Management Software in Residential Rental and Health Insurance Industries

The European Court of Justice released its long-awaited judgment in the Google Shopping saga last week, finally putting to bed close to fifteen years of scrutiny into Google’s practices of favouring its own comparison shopping service (Google Shopping) over rival shopping services.

In its ruling, the ECJ upheld the General Court’s earlier judgment, which had rejected Google’s appeal over the European Commission’s decision to fine it €2.42 billion for abusing its market dominance as a search engine by systematically favouring Google Shopping in its general search results.

The overall outcome of the ECJ’s reasoning in Google Shopping is perhaps unsurprising to competition law practitioners – given the unwavering direction of travel of the case. The ECJ judgment nevertheless raises a number of interesting points and leaves a number of questions unanswered.

Key takeaways

  • Refusal to supply. The judgment confirmed that not every issue of access necessarily requires the application of the Bronner test of refusal to supply. The ECJ found the Bronner doctrine applies in circumstances where a dominant firm refuses to grant a competitor access to infrastructure which it has developed for its own business needs. However, the ECJ ruled that the Bronner test is not applicable in cases where there is no outright refusal of access to infrastructure – but rather access granted on discriminatory terms (such discrimination being assessed under separate forms of potential abuse).
  • Competition not on the merits. The ECJ accepted Google’s arguments that, to establish an abuse of dominance under Article 102, a two-pronged test applies: (i) that actual or potential anticompetitive effects arise from the abusive conduct; and (ii) that the conduct falls outside of “competition on the merits”. However, in assessing the latter requirement, the ECJ rejected Google’s arguments that only circumstances relating specifically to Google’s conduct are relevant to the assessment. Instead, the ECJ held that circumstances relating to the characteristics of the market or the nature of competition may also characterise the conduct as falling outside the scope of competition on the merits.
  • Causality and counterfactual. The ECJ maintained that the causal link is one of the essential elements of a competition law infringement and that, as a result, the burden of proof for such causal link (and hence the counterfactual analysis) lies with the Commission. However, the ECJ found that the counterfactual analysis is just one way to establish causality. Where establishing a credible counterfactual may be “arbitrary or even impossible” (para 231), the Commission cannot be required to systematically establish a counterfactual and can rely on other evidence to establish causality.
  • “As-efficient competitors”. The ECJ reiterated earlier case law that it is not the objective of Article 102 to ensure that less efficient competitors remain on the market, but clarified that this does not mean that every finding of abuse requires a showing that the conduct was capable of excluding an as-efficient competitor. With respect to the AEC test, the Court held that this is just one way to establish an abuse of dominance.

Continue Reading ECJ’s Google Shopping Judgment: The End of a Long Saga

On July 18, 2024, the President of the European Commission, Ursula von der Leyen, was reconfirmed by the European Parliament for a second five-year term. As part of the process, she delivered a speech before the Parliament, complemented by a 30-page program, which outlines the Commission’s political guidelines and

Continue Reading The Future of EU Defence Policy and a Renewed Focus on Technology Security

This update focuses on how growing quantum sector investment in the UK and US is leading to the development and commercialization of quantum computing technologies with the potential to revolutionize and disrupt key sectors.  This is a fast-growing area that is seeing significant levels of public and private investment activity.  We take a look at how approaches differ in the UK and US, and discuss how a concerted, international effort is needed both to realize the full potential of quantum technologies and to mitigate new risks that may arise as the technology matures.

Quantum Computing

Quantum computing uses quantum mechanics principles to solve certain complex mathematical problems faster than classical computers.  Whilst classical computers use binary “bits” to perform calculations, quantum computers use quantum bits (“qubits”).  The value of a bit can only be zero or one, whereas a qubit can exist as zero, one, or a combination of both states (a phenomenon known as superposition), allowing quantum computers to solve certain problems exponentially faster than classical computers.
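The bit/qubit distinction above can be sketched numerically. The following minimal Python example (purely illustrative; the names and helper function are our own, not drawn from any quantum computing library) represents a qubit as a pair of complex amplitudes and applies the Born rule, under which the probability of measuring zero or one is the squared magnitude of the corresponding amplitude:

```python
import math

# A qubit's state is a pair of complex "amplitudes" for the outcomes 0 and 1.
ket0 = (1 + 0j, 0 + 0j)            # behaves like a classical bit set to 0
ket1 = (0 + 0j, 1 + 0j)            # behaves like a classical bit set to 1

# Equal superposition of |0> and |1>: both amplitudes are 1/sqrt(2).
plus = tuple((a + b) / math.sqrt(2) for a, b in zip(ket0, ket1))

def measurement_probabilities(state):
    """Born rule: the probability of each outcome is |amplitude|^2."""
    return tuple(abs(amplitude) ** 2 for amplitude in state)

# A bit is definitely 0 or 1; the superposed qubit yields each with p = 0.5.
print(measurement_probabilities(plus))  # approximately (0.5, 0.5)
```

The superposed state is what a classical bit cannot represent: before measurement it carries weight on both outcomes at once, which is the resource quantum algorithms exploit.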

The applications of quantum technologies are wide-ranging and quantum computing has the potential to revolutionize many sectors, including life-sciences, climate and weather modelling, financial portfolio management and artificial intelligence (“AI”).  However, advances in quantum computing may also lead to some risks, the most significant being to data protection.  Hackers could exploit the ability of quantum computing to solve complex mathematical problems at high speeds to break currently used cryptography methods and access personal and sensitive data. 
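The cryptographic risk is asymmetric, and a rough back-of-the-envelope sketch (idealized; real quantum machines face very large overheads, and the function names below are our own) helps explain why. Shor’s algorithm would break widely used public-key schemes based on factoring or discrete logarithms outright, whereas Grover’s algorithm “only” halves the effective bit-strength of symmetric ciphers, which is why doubling symmetric key lengths is a common mitigation:

```python
# Illustrative sketch of quantum impact on key strength (idealized model).
def grover_effective_bits(symmetric_key_bits: int) -> int:
    """Grover's quadratic search speedup halves effective symmetric strength."""
    return symmetric_key_bits // 2

def shor_breaks(scheme: str) -> bool:
    """Shor's algorithm breaks factoring/discrete-log based public-key schemes."""
    return scheme in {"RSA", "DH", "ECDSA", "ECDH"}

print(grover_effective_bits(128))  # 64: AES-128 drops to a ~64-bit work factor
print(grover_effective_bits(256))  # 128: AES-256 remains comfortably strong
print(shor_breaks("RSA"))          # True: no key-length increase rescues RSA
```

This asymmetry is what drives the “post-quantum” migration mentioned below: symmetric cryptography can be patched by larger keys, but public-key schemes must be replaced with new, quantum-resistant algorithms.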

This is a rapidly developing area that governments are only just turning their attention to.  Governments are focusing not just on “quantum-readiness” and countering the emerging threats that quantum computing will present in the hands of bad actors (the US, for instance, is planning the migration of sensitive data to post-quantum encryption), but also on ramping up investment and growth in quantum technologies. Continue Reading Quantum Computing: Developments in the UK and US

This quarterly update highlights key legislative, regulatory, and litigation developments in the second quarter of 2024 related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), and data privacy and cybersecurity. 

I. Artificial Intelligence

Federal Legislative Developments

  • Impact Assessments: The American Privacy Rights Act of 2024 (H.R. 8818, hereinafter “APRA”) was formally introduced in the House by Representative Cathy McMorris Rodgers (R-WA) on June 25, 2024.  Notably, while previous drafts of the APRA, including the May 21 revised draft, would have required algorithm impact assessments, the introduced version no longer has the “Civil Rights and Algorithms” section that contained these requirements.
  • Disclosures: In April, Representative Adam Schiff (D-CA) introduced the Generative AI Copyright Disclosure Act of 2024 (H.R. 7913).  The Act would require persons that create a training dataset that is used to build a generative AI system to provide notice to the Register of Copyrights containing a “sufficiently detailed summary” of any copyrighted works used in the training dataset and the URL for such training dataset, if the dataset is publicly available.  The Act would require the Register to issue regulations to implement the notice requirements and to maintain a publicly available online database that contains each notice filed.
  • Public Awareness and Toolkits: Certain legislative proposals focused on increasing public awareness of AI and its benefits and risks.  For example, Senator Todd Young (R-IN) introduced the Artificial Intelligence Public Awareness and Education Campaign Act (S. 4596), which would require the Secretary of Commerce, in coordination with other agencies, to carry out a public awareness campaign that provides information regarding the benefits and risks of AI in the daily lives of individuals.  Senator Edward Markey (D-MA) introduced the Social Media and AI Resiliency Toolkits in Schools Act (S. 4614), which would require the Department of Education and the Department of Health and Human Services to develop toolkits to inform students, educators, parents, and others on how AI and social media may impact student mental health.
  • Senate AI Working Group Releases AI Roadmap: On May 15, the Bipartisan Senate AI Working Group published a roadmap for AI policy in the United States (the “AI Roadmap”).  The AI Roadmap encourages committees to conduct further research on specific issues relating to AI, such as “AI and the Workforce” and “High Impact Uses for AI.”  It states that existing laws (concerning, e.g., consumer protection, civil rights) “need to consistently and effectively apply to AI systems and their developers, deployers, and users” and raises concerns about AI “black boxes.”  The AI Roadmap also addresses the need for best practices and the importance of having a human in the loop for certain high impact automated tasks.

Continue Reading U.S. Tech Legislative, Regulatory & Litigation Update – Second Quarter 2024

The field of artificial intelligence (“AI”) is at a tipping point. Governments and industries are under increasing pressure to forecast and guide the evolution of a technology that promises to transform our economies and societies. In this series, our lawyers and advisors provide an overview of the policy approaches and regulatory frameworks for AI in jurisdictions around the world. Given the rapid pace of technological and policy developments in this area, the articles in this series should be viewed as snapshots in time, reflecting the current policy environment and priorities in each jurisdiction.

The following article examines the state of play in AI policy and regulation in China. The previous articles in this series covered the European Union and the United States.

On the sidelines of November’s APEC meetings in San Francisco, Presidents Joe Biden and Xi Jinping agreed that their nations should cooperate on the governance of artificial intelligence. Just weeks prior, President Xi unveiled China’s Global Artificial Intelligence Governance Initiative to world leaders, the nation’s bid to put its stamp on the global governance of AI. This announcement came a day after the Biden Administration revealed another round of restrictions on the export of advanced AI chips to China.

China is an AI superpower. Projections suggest that China’s AI market is on track to exceed US$14 billion this year, with ambitions to grow tenfold by 2030. Major Chinese tech companies have unveiled over twenty large language models (LLMs) to the public, and more than one hundred LLMs are fiercely competing in the market.

Understanding China’s capabilities and intentions in the realm of AI is crucial for policymakers in the U.S. and other countries to craft effective policies toward China, and for multinational companies to make informed business decisions. Irrespective of political differences, as an early mover in the realm of AI policy and regulation, China can serve as a repository of pioneering experiences for jurisdictions currently reflecting on their policy responses to this transformative technology.

This article aims to advance such understanding by outlining key features of China’s emerging approach toward AI. Continue Reading Spotlight Series on Global AI Policy — Part III: China’s Policy Approach to Artificial Intelligence

This quarterly update highlights key legislative, regulatory, and litigation developments in the fourth quarter of 2023 and early January 2024 related to technology issues.  These included developments related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), data privacy, and cybersecurity.  As noted below, some of these developments provide companies with the opportunity for participation and comment.

I. Artificial Intelligence

Federal Executive Developments on AI

The Executive Branch and U.S. federal agencies had an active quarter, which included the White House’s October 2023 release of the Executive Order (“EO”) on Safe, Secure, and Trustworthy Artificial Intelligence.  The EO declares a host of new actions for federal agencies designed to set standards for AI safety and security; protect Americans’ privacy; advance equity and civil rights; protect vulnerable groups such as consumers, patients, and students; support workers; promote innovation and competition; advance American leadership abroad; and effectively regulate the use of AI in government.  The EO builds on the White House’s prior work surrounding the development of responsible AI.  Concerning privacy, the EO sets forth a number of requirements for the use of personal data for AI systems, including the prioritization of federal support for privacy-preserving techniques and strengthening privacy-preserving research and technologies (e.g., cryptographic tools).  Regarding equity and civil rights, the EO calls for clear guidance to landlords, Federal benefits programs, and Federal contractors to keep AI systems from being used to exacerbate discrimination.  The EO also sets out requirements for developers of AI systems, including requiring companies developing any foundation model “that poses a serious risk to national security, national economic security, or national public health and safety” to notify the federal government when training the model and provide results of all red-team safety tests to the government.

Federal Legislative Activity on AI

Congress continued to evaluate AI legislation and proposed a number of AI bills, though none of these bills are expected to progress in the immediate future.  For example, members of Congress continued to hold meetings on AI and introduced bills related to deepfakes, AI research, and transparency for foundational models.

  • Deepfakes and Inauthentic Content:  In October 2023, a group of bipartisan senators released a discussion draft of the NO FAKES Act, which would prohibit persons or companies from producing an unauthorized digital replica of an individual in a performance or hosting unauthorized digital replicas if the platform has knowledge that the replica was not authorized by the individual depicted. 
  • Research:  In November 2023, Senator Thune (R-SD), along with five bipartisan co-sponsors, introduced the Artificial Intelligence Research, Innovation, and Accountability Act (S. 3312), which would require covered internet platforms that operate generative AI systems to provide their users with clear and conspicuous notice that the covered internet platform uses generative AI. 
  • Transparency for Foundational Models:  In December 2023, Representative Beyer (D-VA-8) introduced the AI Foundation Model Transparency Act (H.R. 6881), which would direct the Federal Trade Commission (“FTC”) to establish transparency standards for foundation model deployers in consultation with other agencies.  The standards would require companies to provide consumers and the FTC with information on a model’s training data and mechanisms, as well as information regarding whether user data is collected in inference.
  • Bipartisan Senate Forums:  Senator Schumer’s (D-NY) AI Insight Forums, which are a part of his SAFE Innovation Framework, continued to take place this quarter.  As part of these forums, bipartisan groups of senators met multiple times to learn more about key issues in AI policy, including privacy and liability, long-term risks of AI, and national security.

Continue Reading U.S. Tech Legislative, Regulatory & Litigation Update – Fourth Quarter 2023

Technology companies are grappling with unprecedented changes that promise to accelerate exponentially in the challenging period ahead. We invite you to join Covington experts and invited presenters from around the world to explore the key issues faced by businesses developing or deploying cutting-edge technologies. These highly concentrated sessions are packed

Continue Reading Covington’s Fifth Annual Technology Forum – Looking Ahead: New Legal Frontiers for the Tech Industry