Commerce Department Solicits Feedback on AI Exports Program
The Commerce Department today published a Request for Information (RFI) inviting the public to submit comments on U.S. artificial intelligence exports. The RFI asks stakeholders to weigh in on aspects of the Department’s new “American AI Exports Program,” an initiative intended to “promot[e] the export of full-stack American AI technology…
California Governor Signs Landmark AI Safety Legislation
On September 29, California Governor Gavin Newsom (D) signed into law SB 53, the Transparency in Frontier Artificial Intelligence Act (“TFAIA”), establishing public safety regulations for developers of “frontier models,” or large foundation AI models trained using massive amounts of computing power. TFAIA is the first frontier model safety…
Senator Cruz Unveils AI Framework and Regulatory Sandbox Bill
On September 10, Senate Commerce, Science, and Transportation Committee Chair Ted Cruz (R-TX) released what he called a “light-touch” regulatory framework for federal AI legislation, outlining five pillars for advancing American AI leadership. In parallel, Senator Cruz introduced the Strengthening AI Normalization and Diffusion by Oversight and eXperimentation (“SANDBOX”) Act…
California Lawmakers Advance Suite of AI Bills
As the California Legislature’s 2025 session draws to a close, lawmakers have advanced over a dozen AI bills to the final stages of the legislative process, setting the stage for a potential showdown with Governor Gavin Newsom (D). The AI bills, some of which have already passed both chambers, reflect…
Trump Administration Issues AI Action Plan and Series of AI Executive Orders
On July 23, the White House released its AI Action Plan, outlining the key priorities of the Trump Administration’s AI policy agenda. In parallel, President Trump signed three AI executive orders directing the Executive Branch to implement the AI Action Plan’s policies on “Preventing Woke AI in…
Texas Enacts AI Consumer Protection Law
On June 22, Texas Governor Greg Abbott (R) signed the Texas Responsible AI Governance Act (“TRAIGA”) (HB 149) into law. The law, which takes effect on January 1, 2026, makes Texas the second state to enact comprehensive AI consumer protection legislation, following the 2024 enactment of the Colorado…
California Frontier AI Working Group Issues Final Report on Frontier Model Regulation
On June 17, the Joint California Policy Working Group on AI Frontier Models (“Working Group”) issued its final report on frontier AI policy, following public feedback on the draft version of the report released in March. The report describes “frontier models” as the “most capable” subset of foundation models, or…
New York Legislature Passes Sweeping AI Safety Legislation
On June 12, the New York legislature passed the Responsible AI Safety & Education (“RAISE”) Act (S 6953), a frontier model public safety bill that would establish safeguard, reporting, disclosure, and other requirements for large developers of frontier AI models. If signed into law by Governor Kathy Hochul…
Senate Parliamentarian Clears Revised State AI Enforcement Moratorium for Reconciliation Bill, But Passage Remains in Doubt
In a surprise move, Senate Parliamentarian Elizabeth MacDonough ruled that a proposed moratorium on state and local AI laws satisfies the Byrd Rule, the requirement that reconciliation bills contain only budgetary provisions and omit “extraneous” policy language. While MacDonough’s determination allows the Senate Commerce Committee’s version of the moratorium to…
OECD Introduces AI Capability Indicators for Policymakers
On June 3, 2025, the OECD introduced a new framework, called AI Capability Indicators, that compares AI capabilities to human abilities. The framework is intended to help policymakers, including non-technical audiences, assess how advanced different AI capabilities are and craft informed policy responses to new AI developments. AI researchers, policymakers, and other stakeholder groups, including economists, psychologists, and education specialists, are invited to submit feedback on the current beta version of the framework.
There are nine categories of AI capability indicators, each presented on a five-level scale mapping AI progression toward full human equivalence, with Level 5 representing the most challenging capabilities for AI systems to attain. For each category, the OECD rates the performance of currently available AI systems against human-equivalent capability according to the latest available evidence, as follows (see the sketch after this list):
- Language – ranges from basic keyword recognition (Level 1) to contextually aware discourse generation and open-ended creative writing (Level 5). The OECD considers that the capability level of currently available AI systems is Level 3: reliable understanding and generation of semantic meaning using multi-modal language.
- Social interaction – ranges from social cue interpretation (Level 1) to representation of sophisticated emotional intelligence and multi-party conversational fluency (Level 5). The OECD considers that the capability level of currently available AI systems is Level 2: basic social perception with the ability to adapt slightly based on experience, emotions detected through tone and context, and limited social memory.
- Problem solving – ranges from rule-based task execution (Level 1) to solving novel scenarios that require adaptive reasoning, long-term planning, and multi-step inference (Level 5). The OECD considers that the capability level of currently available AI systems is Level 2: integration of qualitative and quantitative reasoning to address complex problems, with the ability to handle multiple qualitative states and predict how systems may evolve or change over time.
- Creativity – measures originality and generative capacity in art, ranging from template-based generation (Level 1) to creation of entirely novel concepts (Level 5). The OECD considers that the capability level of currently available AI systems is Level 3: generation of output that deviates considerably from the training data, generalization of skills to new tasks, and integration of ideas across domains.
- Metacognition and critical thinking – ranges from basic interpretation or recognition of information (Level 1) to managing complex trade-offs between goals, resources, and necessary skills (Level 5). The OECD considers that the capability level of currently available AI systems is Level 2: monitoring and adjustment of the system’s own understanding and approach for each problem.
- Knowledge, learning, and memory – ranges from data ingestion efficiency and retention (Level 1) to insight-generation from disparate knowledge sources (Level 5). The OECD considers that the capability level of currently available AI systems is Level 3: understanding semantics of information through distributed representations and generalization to novel situations.
- Vision – ranges from basic object recognition (Level 1) to dynamic scene understanding and multi-object tracking under varied environmental conditions (Level 5). The OECD considers that the capability level of currently available AI systems is Level 3: adapting to variations in target object appearance and lighting, performing multiple subtasks, and coping with known variations in data and situations.
- Manipulation – ranges from fine motor control in robotics like picking up simple items (Level 1) to dexterous manipulation of deformable objects (Level 5). The OECD considers that the capability level of currently available AI systems is Level 2: handling different object shapes and moderately pliable materials and operating in controlled environments with low to moderate clutter.
- Robotic intelligence – integrates multiple subdomains like navigation, manipulation, and perception ranging from pre-programmed action (Level 1) to fully autonomous, self-learning robotic agents (Level 5). The OECD considers that the capability level of currently available robotic systems is Level 2: operating in partially known and semi-structured environments with some well-defined variability.
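For readers who want the framework’s structure at a glance, the following is a minimal sketch, assuming a simple Python representation of the nine categories and the OECD’s current Level assessments described above. The class and function names are illustrative only and do not correspond to any official OECD tooling.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CapabilityIndicator:
    """One of the nine OECD AI capability categories, rated on a 1-5 scale."""
    category: str
    current_level: int   # OECD-assessed level of currently available AI systems
    max_level: int = 5   # Level 5 = the most challenging, fully human-equivalent capabilities

# Current assessments as summarized in the beta framework described above.
INDICATORS = [
    CapabilityIndicator("Language", 3),
    CapabilityIndicator("Social interaction", 2),
    CapabilityIndicator("Problem solving", 2),
    CapabilityIndicator("Creativity", 3),
    CapabilityIndicator("Metacognition and critical thinking", 2),
    CapabilityIndicator("Knowledge, learning, and memory", 3),
    CapabilityIndicator("Vision", 3),
    CapabilityIndicator("Manipulation", 2),
    CapabilityIndicator("Robotic intelligence", 2),
]

def levels_remaining(indicator: CapabilityIndicator) -> int:
    """Levels between current AI performance and full human equivalence (Level 5)."""
    return indicator.max_level - indicator.current_level

if __name__ == "__main__":
    for ind in INDICATORS:
        print(f"{ind.category}: Level {ind.current_level} of {ind.max_level} "
              f"({levels_remaining(ind)} level(s) short of full human equivalence)")
```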