Artificial Intelligence (AI)

On February 7, 2025, the OECD launched a voluntary framework for companies to report on their efforts to promote safe, secure and trustworthy AI.  This global reporting framework is intended to monitor and support the application of the International Code of Conduct for Organisations Developing Advanced AI Systems delivered by the 2023 G7 Hiroshima AI Process (“HAIP Code of Conduct”).  Organizations can choose to comply with the HAIP Code of Conduct and participate in the HAIP reporting framework on a voluntary basis.  The reporting framework will allow participating organizations that comply with the HAIP Code of Conduct to showcase their efforts toward ensuring responsible AI practices in a way that is standardized and comparable across organizations.

Organizations that choose to report under the HAIP reporting framework would complete a questionnaire that contains the following seven sections:

  1. Risk identification and evaluation – includes questions regarding, among others, how the organization classifies risk, identifies and evaluates risks, and conducts testing.
  2. Risk management and information security – includes questions regarding, among others, how the organization promotes data quality, protects intellectual property and privacy, and implements AI-specific information security practices.
  3. Transparency reporting on advanced AI systems – includes questions regarding, among others, reports, technical documentation, and transparency practices.
  4. Organizational governance, incident management, and transparency – includes questions regarding, among others, organizational governance, staff training, and AI incident response processes.
  5. Content authentication & provenance mechanisms – includes questions regarding mechanisms to inform users that they are interacting with an AI system, and the organization’s use of mechanisms such as labelling or watermarking to enable users to identify AI-generated content.
  6. Research & investment to advance AI safety & mitigate societal risks – includes questions regarding, among others, how the organization participates in projects, collaborations and investments regarding research on various facets of AI, such as AI safety, security, trustworthiness, risk mitigation tools, and environmental risks.
  7. Advancing human and global interests – includes questions regarding, among others, how the organization seeks to support digital literacy and human-centric AI, and to drive positive change through AI.

Continue Reading OECD Launches Voluntary Reporting Framework on AI Risk Management Practices

On January 3, 2025, the Federal Trade Commission (“FTC”) announced that it reached a settlement with accessiBe, a provider of AI-powered web accessibility software, to resolve allegations that the company violated Section 5 of the FTC Act concerning the marketing and stated efficacy of its software.

The complaint alleges that

Continue Reading AI Accessibility Software Provider Settles FTC Allegations

This is the first blog in a series covering the Fiscal Year 2025 National Defense Authorization Act (“FY 2025 NDAA”).  This first blog will cover: (1) NDAA sections affecting acquisition policy and contract administration that may be of greatest interest to government contractors; (2) initiatives that underscore Congress’s commitment to strengthening cybersecurity, both domestically and internationally; and (3) NDAA provisions that aim to accelerate the Department of Defense’s adoption of AI and Autonomous Systems and counter efforts by U.S. adversaries to subvert them. 

Future posts in this series will address NDAA provisions targeting China, supply chain and stockpile security, the revitalized Administrative False Claims Act, and Congress’s effort to mature the Office of Strategic Capital and leverage private investment to accelerate the development of critical technologies and strengthen the defense industrial base.  Subscribe to our blog here so that you do not miss these updates.

FY 2025 NDAA Overview

On December 23, 2024, President Biden signed the FY 2025 NDAA into law.  The FY 2025 NDAA authorizes $895.2 billion in funding for the Department of Defense (“DoD”) and Department of Energy national security programs—a $9 billion or 1 percent increase over FY 2024.  NDAA authorizations have traditionally served as a reliable indicator of congressional sentiment on final defense appropriations.

FY 2025 marks the 64th consecutive year in which an NDAA has been enacted, reflecting its status as “must-pass” legislation.  As in prior years, the NDAA has been used as a legislative vehicle to incorporate other measures, including the FY 2025 Department of State and Intelligence Authorization Acts, as well as provisions related to the Departments of Justice, Homeland Security, and Veterans Affairs, among others.

Below are select provisions of interest to companies across industries that engage in U.S. Government contracting, including defense contractors, technology providers, life sciences firms, and commercial-item suppliers.

Continue Reading President Biden signs the National Defense Authorization Act for Fiscal Year 2025

Technology companies will be in for a bumpy ride in the second Trump Administration.  President-elect Trump has promised to adopt policies that will accelerate the United States’ technological decoupling from China.  However, he will likely take a more hands-off approach to regulating artificial intelligence and reverse several Biden Administration policies related to AI and other emerging technologies.

Continue Reading Tech Policy in a Second Trump Administration: AI Promotion and Further Decoupling from China

On July 9, 2024, the FTC and California Attorney General settled a case against NGL Labs (“NGL”) and two of its co-founders. NGL’s app, “NGL: ask me anything,” allows users to receive anonymous messages from their friends and social media followers. The complaint alleged violations of the FTC Act

Continue Reading FTC Reaches Settlement with NGL Labs Over Children’s Privacy & AI

On May 20, 2024, a proposal for a law on artificial intelligence (“AI”) was laid before the Italian Senate.

The proposed law sets out (1) general principles for the development and use of AI systems and models; (2) sectorial provisions, particularly in the healthcare sector and for scientific research for healthcare; (3) rules on the national strategy on AI and governance, including designating the national competent authorities in accordance with the EU AI Act; and (4) amendments to copyright law. 

We provide below an overview of the proposal’s key provisions.

Objectives and General Principles

The proposed law aims to promote a “fair, transparent and responsible” use of AI, following a human-centered approach, and to monitor potential economic and social risks, as well as risks to fundamental rights.  The law will sit alongside and complement the EU AI Act (for more information on the EU AI Act, see our blogpost here).  (Article 1)

The proposed law sets out general principles, based on those developed by the European Commission’s High-Level Expert Group on Artificial Intelligence, pursuing three broad objectives:

  1. Fair algorithmic processing. Research, testing, development, implementation and application of AI systems must respect individuals’ fundamental rights and freedoms, and the principles of transparency, proportionality, security, protection of personal data and confidentiality, accuracy, non-discrimination, gender equality and inclusion.
  2. Protection of data. The development of AI systems and models must be based on data and processes that are proportionate to the sectors in which they are intended to be used, and must ensure that data is accurate, reliable, secure, of high quality, appropriate and transparent.  Cybersecurity must be ensured throughout the systems’ lifecycle, and specific security measures must be adopted.
  3. Digital sustainability. The development and implementation of AI systems and models must ensure human autonomy and decision-making, prevention of harm, transparency and explainability.  (Article 3)

Continue Reading Italy Proposes New Artificial Intelligence Law

Earlier this week, Members of the European Parliament (MEPs) cast their votes in favor of the much-anticipated AI Act. With 523 votes in favor, 46 votes against, and 49 abstentions, the vote is a culmination of an effort that began in April 2021, when the EU Commission first published its 

Continue Reading EU Parliament Adopts AI Act

The field of artificial intelligence (“AI”) is at a tipping point. Governments and industries are under increasing pressure to forecast and guide the evolution of a technology that promises to transform our economies and societies. In this series, our lawyers and advisors provide an overview of the policy approaches and regulatory frameworks for AI in jurisdictions around the world. Given the rapid pace of technological and policy developments in this area, the articles in this series should be viewed as snapshots in time, reflecting the current policy environment and priorities in each jurisdiction.

The following article examines the state of play in AI policy and regulation in China. The previous articles in this series covered the European Union and the United States.

On the sidelines of November’s APEC meetings in San Francisco, Presidents Joe Biden and Xi Jinping agreed that their nations should cooperate on the governance of artificial intelligence. Just weeks prior, President Xi unveiled China’s Global Artificial Intelligence Governance Initiative to world leaders, the nation’s bid to put its stamp on the global governance of AI. This announcement came a day after the Biden Administration revealed another round of restrictions on the export of advanced AI chips to China.

China is an AI superpower. Projections suggest that China’s AI market is on track to exceed US$14 billion this year, with ambitions to grow tenfold by 2030. Major Chinese tech companies have unveiled over twenty large language models (LLMs) to the public, and more than one hundred LLMs are fiercely competing in the market.

Understanding China’s capabilities and intentions in the realm of AI is crucial for policymakers in the U.S. and other countries to craft effective policies toward China, and for multinational companies to make informed business decisions. Irrespective of political differences, as an early mover in the realm of AI policy and regulation, China can serve as a repository of pioneering experiences for jurisdictions currently reflecting on their policy responses to this transformative technology.

This article aims to advance such understanding by outlining key features of China’s emerging approach toward AI.

Continue Reading Spotlight Series on Global AI Policy — Part III: China’s Policy Approach to Artificial Intelligence


The following article examines the state of play in AI policy and regulation in the United States. The previous article in this series covered the European Union.

Future of AI Policy in the U.S.

U.S. policymakers are focused on artificial intelligence (AI) platforms as they explode into the mainstream.  AI has emerged as an active policy space across Congress and the Biden Administration, as officials scramble to educate themselves on the technology while crafting legislation, rules, and other measures to balance U.S. innovation leadership with national security priorities.

Over the past year, AI issues have drawn bipartisan interest and support.  House and Senate committees have held nearly three dozen hearings on AI this year alone, and more than 30 AI-focused bills have been introduced so far this Congress.  Two bipartisan groups of Senators have announced separate frameworks for comprehensive AI legislation.  Several AI bills—largely focused on the federal government’s internal use of AI—have also been voted on and passed through committees. 

Meanwhile, the Biden Administration has announced plans to issue a comprehensive executive order this fall to address a range of AI risks under existing law.  The Administration has also taken steps to promote the responsible development and deployment of AI systems, including securing voluntary commitments regarding AI safety and transparency from 15 technology companies. 

Despite strong bipartisan interest in AI regulation, commitment from leaders of major technology companies engaged in AI R&D, and broad support from the general public, passing comprehensive AI legislation remains a challenge.  No consensus has emerged around either substance or process, with different groups of Members, particularly in the Senate, developing their own versions of AI legislation through different procedures.  In the House, a bipartisan bill would punt the issue of comprehensive regulation to the executive branch, creating a blue-ribbon commission to study the issue and make recommendations.

I. Major Policy & Regulatory Initiatives

Three versions of a comprehensive AI regulatory regime have emerged in Congress – two in the Senate and one in the House.  We preview these proposals below.

A. SAFE Innovation: Values-Based Framework and New Legislative Process

In June, Senate Majority Leader Chuck Schumer (D-NY) unveiled a new bipartisan proposal—with Senators Martin Heinrich (D-NM), Todd Young (R-IN), and Mike Rounds (R-SD)—to develop legislation to promote and regulate artificial intelligence.  Leader Schumer proposed a plan to boost U.S. global competitiveness in AI development, while ensuring appropriate protections for consumers and workers.

Continue Reading Spotlight Series on Global AI Policy — Part II: U.S. Legislative and Regulatory Developments

On August 22, 2023, the Spanish Council of Ministers approved the Statute of the Spanish Agency for the Supervision of Artificial Intelligence (“AESIA”), thus creating the first AI regulatory body in the EU. The AESIA will start operating from December 2023, in anticipation of the upcoming EU AI Act (for

Continue Reading Spain Creates AI Regulator to Enforce the AI Act