Tech Policy in a Second Trump Administration: AI Promotion and Further Decoupling from China
Technology companies will be in for a bumpy ride in the second Trump Administration. President-elect Trump has promised to adopt policies that will accelerate the United States’ technological decoupling from China. However, he will likely take a more hands-off approach to regulating artificial intelligence and reverse several Biden Administration policies related to AI and other emerging technologies.
Artificial Intelligence (AI)
October 2024 Developments Under President Biden’s AI Executive Order
This is part of an ongoing series of Covington blogs on the implementation of Executive Order No. 14110 on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” (the “AI EO”), issued by President Biden on October 30, 2023. The first blog summarized the AI EO’s key provisions and…
Texas Legislature to Consider Sweeping AI Legislation in 2025
On October 28, Texas State Representative Giovanni Capriglione (R-Tarrant County) released a draft of the Texas Responsible AI Governance Act (“TRAIGA”) after nearly a year of collecting input from industry stakeholders. Representative Capriglione, who authored Texas’s Data Privacy and Security Act (discussed here) and currently co-chairs the state’s AI Advisory Council, appears likely to introduce TRAIGA in the upcoming legislative session, scheduled to begin on January 14, 2025. Modeled after the Colorado AI Act (SB 205) (discussed here) and the EU AI Act, TRAIGA would establish obligations for developers, deployers, and distributors of “high-risk AI systems.” Additionally, TRAIGA would establish an “AI Regulatory Sandbox Program” that would allow participating AI developers to test AI systems under a statutory exemption.
Although a number of states have expressed significant interest in AI regulation, if TRAIGA passes, Texas would become only the second state to enact industry-agnostic, risk-based AI legislation, following Colorado’s passage of the Colorado AI Act in May. There is significant activity in other states as well: the California Privacy Protection Agency is considering rules that would apply to certain automated decisionmaking and AI systems, and other states are expected to introduce AI legislation in the new session. In addition to its requirements for high-risk AI and its AI sandbox program, TRAIGA would amend Texas’s Data Privacy and Security Act to incorporate AI-specific provisions and would provide for an AI workforce grant program and a new “AI Council” to provide advisory opinions and guidance on AI.
Despite TRAIGA’s similarities to the Colorado AI Act, however, a number of provisions in the 41-page draft would differ from that law:
Lower Thresholds for “High-Risk AI.” Although TRAIGA takes a risk-based approach to regulation by focusing requirements on AI systems that present heightened risks to individuals, the scope of TRAIGA’s high-risk AI systems would arguably be broader than under the Colorado AI Act. First, TRAIGA would apply to systems that are a “contributing factor” in consequential decisions, not only to systems that constitute a “substantial factor” in such decisions, as contemplated by the Colorado AI Act. Additionally, TRAIGA would define “consequential decision” more broadly than the Colorado AI Act, to include decisions that affect consumers’ access to, cost of, or terms of, for example, transportation services, criminal case assessments, and electricity services.
California Enacts Health AI Bill and Protections for Neural Data
On September 28, California’s governor signed a number of bills into law, including legislation regulating health care facilities’ use of artificial intelligence (“AI”). Among them were AB 3030, which regulates certain California-licensed health care facilities’ use of AI, and SB 1223, which amends the California Consumer Privacy Act (CCPA)…
The EU Considers Changing the EU AI Liability Directive into a Software Liability Regulation
Now that the EU Artificial Intelligence Act (“AI Act”) has entered into force, the EU institutions are turning their attention to the proposal for a directive on adapting non-contractual civil liability rules to artificial intelligence (the so-called “AI Liability Directive”). Although the EU Parliament and the Council informally agreed on the text of the AI Act in December 2023 (see our previous blog posts here and here), the text of the AI Liability Directive proposal is expected to change based on a complementary impact assessment published by the European Parliamentary Research Service on September 19.
Brief Overview of the AI Liability Directive
The AI Liability Directive was proposed to establish harmonised rules for fault-based claims (e.g., negligence). These rules were to cover the disclosure of evidence on high-risk artificial intelligence (“AI”) systems and the burden of proof, including, in certain circumstances, a rebuttable presumption of causation between the fault of the defendant (i.e., the provider or deployer of an AI system) and the output produced by the AI system, or the failure of the AI system to produce an output.
Potential Changes to the AI Liability Directive
In July, news reports revealed a leaked, slightly amended version of the European Commission’s AI Liability Directive proposal that aligns its wording with the adopted AI Act (Council document ST 12523 2024 INIT). The amendments reflect the difference in numbering between the proposed AI Act and the enacted version.
Over the summer, the EU Parliamentary Research Service carried out a complementary impact assessment to evaluate whether the AI Liability Directive should remain on the EU’s list of priorities. In particular, the new assessment was to determine whether the AI Liability Directive is still needed in light of the proposal for a new Product Liability Directive (see our blog post here).
State Courts Dismiss Claims Involving the Use of Revenue Management Software in Residential Rental and Health Insurance Industries
In the past several months, two state courts in the District of Columbia and California decided motions to dismiss in cases alleging that the use of certain revenue management software violated state antitrust laws in the residential property rental management and health insurance industries. In both industries, parallel class actions…
California Governor Vetoes AI Safety Bill
On September 29, California Governor Gavin Newsom (D) vetoed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), putting an end, for now, to a months-long effort to establish public safety standards for developers of large AI systems. SB 1047’s sweeping AI safety and security…
Every Quarter, On the Quarter: BIS Proposes New Reporting Requirements for the Development of Advanced Artificial Intelligence Models and Possession of Large-Scale Computing Clusters
In a new post on the Inside Government Contracts blog, our colleagues discuss reporting requirements proposed by the Department of Commerce’s Bureau of Industry and Security (“BIS”) for the development of advanced AI models and possession of large-scale computing clusters.
Healthcare Technology Company Settles Texas Attorney General Allegations Regarding Accuracy of Generative AI Products
On September 18, 2024, the Texas Office of the Attorney General (“OAG”) announced that it reached “a first-of-its-kind settlement with a Dallas-based artificial intelligence healthcare technology company called Pieces Technologies” (“Pieces”) to resolve “allegations that the company deployed its products at several Texas hospitals after making a series of false and…
California Legislature Passes Landmark AI Safety Legislation
On August 29, California lawmakers passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), marking yet another major development in states’ efforts to regulate AI. The legislation, which draws on concepts from the White House’s 2023 AI Executive Order (“AI EO”), follows months of high-profile debate and amendments and would establish an expansive AI safety and security regime for developers of “covered models.” Governor Gavin Newsom (D) has until September 30 to sign or veto the bill.
If signed into law, SB 1047 would join Colorado’s SB 205—the landmark AI anti-discrimination law passed in May and covered here—as another de facto standard for AI legislation in the United States in the absence of congressional action. In contrast to Colorado SB 205’s focus on algorithmic discrimination risks for consumers, however, SB 1047 would address AI models that are technically capable of causing or materially enabling “critical harms” to public safety.
Covered Models. SB 1047 establishes a two-part definition of “covered models” subject to its safety and security requirements. First, prior to January 1, 2027, covered models are defined as AI models trained using a quantity of computing power that is both greater than 10^26 floating-point operations and valued at more than $100 million. This computing threshold mirrors the AI EO’s threshold for dual-use foundation models subject to red-team testing and reporting requirements; the financial valuation threshold is designed to exclude models developed by small companies. Similar to the Commerce Department’s discretion to adjust the AI EO’s computing threshold, California’s Government Operations Agency (“GovOps”) may adjust SB 1047’s computing threshold after January 1, 2027. By contrast, GovOps may not adjust the valuation threshold, which is indexed to inflation and must be “reasonably assessed” by the developer “using the average market prices of cloud compute at the start of training.”
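To make the two-part test concrete, here is a minimal sketch, in Python, of how a developer might check both prongs of the pre-2027 definition. The FLOP count, cloud-compute price, and function name are illustrative assumptions for demonstration, not figures or terms drawn from the bill.

```python
# Illustrative sketch (not the bill's text): checking both prongs of
# SB 1047's pre-2027 "covered model" definition. All inputs below are
# hypothetical assumptions.

COMPUTE_THRESHOLD_FLOP = 1e26       # compute prong: > 10^26 operations
COST_THRESHOLD_USD = 100_000_000    # cost prong: > $100M (indexed to inflation)


def is_covered_model(training_flop: float, usd_per_flop: float) -> bool:
    """Return True if a training run would satisfy both prongs:
    total training compute above 10^26 FLOP, and an estimated cost
    above $100M at average cloud market prices at the start of training."""
    estimated_cost_usd = training_flop * usd_per_flop
    return (training_flop > COMPUTE_THRESHOLD_FLOP
            and estimated_cost_usd > COST_THRESHOLD_USD)


# Example: a 2e26-FLOP run at a hypothetical $1 per 10^18 FLOP of cloud
# compute costs about $200M, so both prongs are met.
print(is_covered_model(training_flop=2e26, usd_per_flop=1e-18))  # True
```

Because the cost prong is assessed using average cloud market prices at the start of training, the same quantity of compute could fall in or out of scope over time as those prices change.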