OECD

On June 3, 2025, the OECD introduced a new framework, called AI Capability Indicators, that compares AI capabilities to human abilities. The framework is intended to help policymakers assess the progress of AI systems and enable informed policy responses to new AI advancements. The indicators are designed to help non-technical policymakers understand the degree of advancement of different AI capabilities. AI researchers, policymakers, and other stakeholder groups, including economists, psychologists, and education specialists, are invited to submit feedback on the current beta framework.

There are nine categories of AI capability indicators, each presented on a five-level scale mapping AI progression toward full human equivalence, with Level 5 representing the most challenging capabilities for AI systems to attain. For each category, the OECD rates the performance of current AI systems against human-equivalent capability according to the latest available evidence, as follows:

  • Language – ranges from basic keyword recognition (Level 1) to contextually aware discourse generation and open-ended creative writing (Level 5). The OECD considers that the capability level of currently available AI systems is Level 3: reliable understanding and generation of semantic meaning using multi-modal language.
  • Social interaction – ranges from social cue interpretation (Level 1) to representation of sophisticated emotional intelligence and multi-party conversational fluency (Level 5). The OECD considers that the capability level of currently available AI systems is Level 2: basic social perception with the ability to slightly adapt based on experience, emotions detected through tone and context, and limited social memory.
  • Problem solving – ranges from rule-based task execution (Level 1) to handling novel scenarios that require adaptive reasoning, long-term planning, and multi-step inference (Level 5). The OECD considers that the capability level of currently available AI systems is Level 2: integration of qualitative and quantitative reasoning to address complex problems, with the capacity to handle multiple qualitative states and predict how systems may evolve or change over time.
  • Creativity – measures originality and generative capacity in art, ranging from template-based generation (Level 1) to creation of entirely novel concepts (Level 5). The OECD considers that the capability level of currently available AI systems is Level 3: generation of output that deviates considerably from the training data, generalization of skills to new tasks, and integration of ideas across domains.
  • Metacognition and critical thinking – ranges from basic interpretation or recognition of information (Level 1) to managing complex trade-offs between goals, resources, and necessary skills (Level 5). The OECD considers that the capability level of currently available AI systems is Level 2: monitoring and adjustment of the system’s own understanding and approach according to each problem.
  • Knowledge, learning, and memory – ranges from data ingestion efficiency and retention (Level 1) to insight-generation from disparate knowledge sources (Level 5). The OECD considers that the capability level of currently available AI systems is Level 3: understanding semantics of information through distributed representations and generalization to novel situations.
  • Vision – ranges from basic object recognition (Level 1) to dynamic scene understanding and multi-object tracking under varied environmental conditions (Level 5). The OECD considers that the capability level of currently available AI systems is Level 3: adapting to variations in target object appearance and lighting, performing multiple subtasks, and coping with known variations in data and situations.
  • Manipulation – ranges from fine motor control in robotics, such as picking up simple items (Level 1), to dexterous manipulation of deformable objects (Level 5). The OECD considers that the capability level of currently available AI systems is Level 2: handling different object shapes and moderately pliable materials, and operating in controlled environments with low to moderate clutter.
  • Robotic intelligence – integrates multiple subdomains, such as navigation, manipulation, and perception, ranging from pre-programmed action (Level 1) to fully autonomous, self-learning robotic agents (Level 5). The OECD considers that the capability level of currently available robotic systems is Level 2: operating in partially known and semi-structured environments with some well-defined variability.

Continue Reading OECD Introduces AI Capability Indicators for Policymakers

On February 7, 2025, the OECD launched a voluntary framework for companies to report on their efforts to promote safe, secure, and trustworthy AI. This global reporting framework is intended to monitor and support the application of the International Code of Conduct for Organisations Developing Advanced AI Systems delivered by the 2023 G7 Hiroshima AI Process (“HAIP Code of Conduct”). Organizations can choose to comply with the HAIP Code of Conduct and participate in the HAIP reporting framework on a voluntary basis. The reporting framework will allow participating organizations that comply with the HAIP Code of Conduct to showcase their efforts toward ensuring responsible AI practices in a way that is standardized and comparable across companies.

Organizations that choose to report under the HAIP reporting framework complete a questionnaire containing the following seven sections:

  1. Risk identification and evaluation – includes questions regarding, among others, how the organization classifies risk, identifies and evaluates risks, and conducts testing.
  2. Risk management and information security – includes questions regarding, among others, how the organization promotes data quality, protects intellectual property and privacy, and implements AI-specific information security practices.
  3. Transparency reporting on advanced AI systems – includes questions regarding, among others, reports, technical documentation, and transparency practices.
  4. Organizational governance, incident management, and transparency – includes questions regarding, among others, organizational governance, staff training, and AI incident response processes.
  5. Content authentication & provenance mechanisms – includes questions regarding mechanisms to inform users that they are interacting with an AI system, and the organization’s use of mechanisms such as labelling or watermarking to enable users to identify AI-generated content.
  6. Research & investment to advance AI safety & mitigate societal risks – includes questions regarding, among others, how the organization participates in projects, collaborations and investments regarding research on various facets of AI, such as AI safety, security, trustworthiness, risk mitigation tools, and environmental risks.
  7. Advancing human and global interests – includes questions regarding, among others, how the organization seeks to support digital literacy, promote human-centric AI, and drive positive change through AI.

Continue Reading OECD Launches Voluntary Reporting Framework on AI Risk Management Practices

G7 leaders met last week for the first time ever in Brussels, and for the first time since 1998 without Russia. They issued Russia a strong warning and then proceeded to make substantive headway without it on international tax, energy, trade and investment, security, and development issues in ways that directly
Continue Reading G7 Leaders Tackle Growth, Stability and Tax Bases; Warn Absent Russia