AI

On December 19, New York Governor Kathy Hochul (D) signed the Responsible AI Safety & Education (“RAISE”) Act into law, making New York the second state in the nation to codify public safety disclosure and reporting requirements for developers of frontier AI models.  Prior to signing, Governor Hochul secured several

Continue Reading New York Governor Signs Frontier AI Safety Legislation

On December 11, President Trump signed an Executive Order on “Ensuring a National Policy Framework for Artificial Intelligence” (“AI Preemption EO”), the culmination of months of efforts by Republican lawmakers to assert federal primacy over AI regulation.  The AI Preemption EO, which follows the release of a draft version in

Continue Reading President Trump Signs Executive Order to Block State AI Laws

According to reports published on November 19, the White House has prepared a draft Executive Order to preempt state AI regulations in lieu of a uniform national legislative framework, marking a significant escalation in federal efforts to assert control over AI regulation.  The draft Executive Order, titled “Eliminating State

Continue Reading White House Drafts Executive Order to Preempt State AI Laws

The Commerce Department today published a Request for Information (RFI) inviting the public to submit comments on U.S. artificial intelligence exports.  The RFI asks stakeholders to weigh in on aspects of the Department’s new “American AI Exports Program,” an initiative intended to “promot[e] the export of full-stack American AI technology

Continue Reading Commerce Department Solicits Feedback on AI Exports Program

On September 29, California Governor Gavin Newsom (D) signed into law SB 53, the Transparency in Frontier Artificial Intelligence Act (“TFAIA”), establishing public safety regulations for developers of “frontier models,” or large foundation AI models trained using massive amounts of computing power.  TFAIA is the first frontier model safety

Continue Reading California Governor Signs Landmark AI Safety Legislation

On September 10, Senate Commerce, Science, and Transportation Committee Chair Ted Cruz (R-TX) released what he called a “light-touch” regulatory framework for federal AI legislation, outlining five pillars for advancing American AI leadership.  In parallel, Senator Cruz introduced the Strengthening AI Normalization and Diffusion by Oversight and eXperimentation (“SANDBOX”) Act

Continue Reading Senator Cruz Unveils AI Framework and Regulatory Sandbox Bill

On June 12, the New York legislature passed the Responsible AI Safety & Education (“RAISE”) Act (S 6953), a frontier model public safety bill that would establish safeguard, reporting, disclosure, and other requirements for large developers of frontier AI models.  If signed into law by Governor Kathy Hochul

Continue Reading New York Legislature Passes Sweeping AI Safety Legislation

On June 3, 2025, the OECD introduced a new framework called AI Capability Indicators that compares AI capabilities to human abilities. The framework is intended to help policymakers assess the progress of AI systems and enable informed policy responses to new AI advancements. The indicators are designed to help non-technical policymakers understand the degree of advancement of different AI capabilities. AI researchers, policymakers, and other stakeholder groups, including economists, psychologists, and education specialists, are invited to submit feedback on the current beta framework.

There are nine categories of AI capability indicators, each presented on a five-level scale mapping AI progression toward full human equivalence, with Level 5 representing the most challenging capabilities for AI systems to attain. Each category rates current AI performance against human-equivalent capability according to the latest available evidence, as follows:

  • Language – ranges from basic keyword recognition (Level 1) to contextually aware discourse generation and open-ended creative writing (Level 5). The OECD considers that the capability level of currently available AI systems is Level 3: reliable understanding and generation of semantic meaning using multi-modal language.
  • Social interaction – ranges from social cue interpretation (Level 1) to sophisticated emotional intelligence and multi-party conversational fluency (Level 5). The OECD considers that the capability level of currently available AI systems is Level 2: basic social perception with a limited ability to adapt based on experience, emotions detected through tone and context, and limited social memory.
  • Problem solving – ranges from rule-based task execution (Level 1) to handling novel scenarios that require adaptive reasoning, long-term planning, and multi-step inference (Level 5). The OECD considers that the capability level of currently available AI systems is Level 2: integration of qualitative and quantitative reasoning to address complex problems, handling multiple qualitative states, and predicting how systems may evolve or change over time.
  • Creativity – measures originality and generative capacity in art, ranging from template-based generation (Level 1) to creation of entirely novel concepts (Level 5). The OECD considers that the capability level of currently available AI systems is Level 3: generation of output that deviates considerably from the training data, generalization of skills to new tasks, and integration of ideas across domains.
  • Metacognition and critical thinking – ranges from basic interpretation or recognition of information (Level 1) to managing complex trade-offs between goals, resources, and necessary skills (Level 5). The OECD considers that the capability level of currently available AI systems is Level 2: monitoring and adjusting the system’s own understanding and approach for each problem.
  • Knowledge, learning, and memory – ranges from data ingestion efficiency and retention (Level 1) to insight-generation from disparate knowledge sources (Level 5). The OECD considers that the capability level of currently available AI systems is Level 3: understanding semantics of information through distributed representations and generalization to novel situations.
  • Vision – ranges from basic object recognition (Level 1) to dynamic scene understanding and multi-object tracking under varied environmental conditions (Level 5). The OECD considers that the capability level of currently available AI systems is Level 3: adapting to variations in target object appearance and lighting, performing multiple subtasks, and coping with known variations in data and situations.
  • Manipulation – ranges from fine motor control in robotics like picking up simple items (Level 1) to dexterous manipulation of deformable objects (Level 5). The OECD considers that the capability level of currently available AI systems is Level 2: handling different object shapes and moderately pliable materials and operating in controlled environments with low to moderate clutter.
  • Robotic intelligence – integrates multiple subdomains like navigation, manipulation, and perception ranging from pre-programmed action (Level 1) to fully autonomous, self-learning robotic agents (Level 5). The OECD considers that the capability level of currently available robotic systems is Level 2: operating in partially known and semi-structured environments with some well-defined variability.
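The nine-category structure above lends itself to a simple tabular representation. As an illustrative sketch only (not an official OECD artifact), the categories and the OECD's current Level assessments can be captured in a small lookup, with a hypothetical helper for filtering by level:

```python
# Illustrative sketch: the nine beta AI Capability Indicator categories
# and the OECD's assessment of currently available AI systems on each
# 1-5 scale, as summarized in the list above. The helper function is a
# hypothetical convenience, not part of the OECD framework.
OECD_AI_CAPABILITY_LEVELS = {
    "Language": 3,
    "Social interaction": 2,
    "Problem solving": 2,
    "Creativity": 3,
    "Metacognition and critical thinking": 2,
    "Knowledge, learning, and memory": 3,
    "Vision": 3,
    "Manipulation": 2,
    "Robotic intelligence": 2,
}

def capabilities_at_or_above(threshold: int) -> list[str]:
    """Return the categories where current AI meets the given level."""
    return [name for name, level in OECD_AI_CAPABILITY_LEVELS.items()
            if level >= threshold]

# Categories the OECD currently assesses at Level 3 (none reach 4 or 5):
print(capabilities_at_or_above(3))
# → ['Language', 'Creativity', 'Knowledge, learning, and memory', 'Vision']
```

A representation like this makes the framework's headline finding easy to see: under the OECD's beta assessment, no indicator currently exceeds Level 3 on the five-level scale.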

Continue Reading OECD Introduces AI Capability Indicators for Policymakers

In September, FTC Chairman Andrew Ferguson called for the FTC to regulate artificial intelligence claims through its existing consumer protection authorities:  “Imposing comprehensive regulations at the incipiency of a potential technological revolution would be foolish.  For now, we should limit ourselves to enforcing existing laws against illegal conduct when it

Continue Reading FTC Challenges Deceptive Artificial Intelligence Claims

House Republicans have advanced through committee a nationwide, 10-year moratorium on the enforcement of state and local laws and regulations that impose requirements on AI and automated decision systems.  The moratorium, which would not apply to laws that promote AI adoption, highlights the widening gap between a wave of new

Continue Reading House Republicans Push for 10-Year Moratorium on State AI Laws