Important changes to California’s pay-to-play law took effect January 1, 2025, and now the state’s regulations have caught up to the law.

The state’s Fair Political Practices Commission adopted the new regulations late last month, following statutory revisions to California’s complex pay-to-play law found at California Government Code § 84308.

Continue Reading California Updates Pay-to-Play Law Regulations to Reflect Recent Law Changes

Many businesses use customer support software that may include call recording features to help ensure a better customer service experience.  A California federal court dismissed a wiretapping lawsuit filed against TalkDesk, a software company offering such a tool, holding that TalkDesk’s alleged recording of customers’ conversations with clothing retailers “is

Continue Reading Recording of Customer Service Call “Not Private or Personal Enough” to Confer Article III Standing

On April 3, the White House Office of Management and Budget (“OMB”) released two memoranda with AI guidance and requirements for federal agencies, Memorandum M-25-21 on Accelerating Federal Use of AI through Innovation, Governance, and Public Trust (“OMB AI Use Memo”) and Memorandum M-25-22 on Driving Efficient Acquisition of Artificial Intelligence in Government (“OMB AI Procurement Memo”).  According to the White House’s fact sheet, the OMB AI Use and AI Procurement Memos (collectively, the “new OMB AI Memos”), which rescind and replace OMB memos on AI use and procurement issued under President Biden’s Executive Order 14110 (“Biden OMB AI Memos”), shift U.S. AI policy to a “forward-leaning, pro-innovation, and pro-competition mindset” that will make agencies “more agile, cost-effective, and efficient.”  The new OMB AI Memos implement President Trump’s January 23 Executive Order 14179 on “Removing Barriers to American Leadership in Artificial Intelligence” (the “AI EO”), which directs the OMB to revise the Biden OMB AI Memos to make them consistent with the AI EO’s policy of “sustain[ing] and enhanc[ing] America’s global AI dominance.”

Overall, the new OMB AI Memos build on the frameworks established under President Trump’s 2020 Executive Order 13960 on “Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government” and the Biden OMB AI Memos.  This is consistent with the AI EO, which noted that the Administration would “revise” the Biden AI Memos “as necessary.”  At the same time, the new OMB AI Memos include some significant differences from the Biden OMB’s approach in the areas discussed below (as well as other areas).

  • Scope & Definitions.  The OMB AI Use Memo applies to “new and existing AI that is developed, used, or acquired by or on behalf of covered agencies,” with certain exclusions for the Intelligence Community and the Department of Defense.  The memo defines “AI” by reference to Section 238(g) of the John S. McCain National Defense Authorization Act for Fiscal Year 2019.  Like the Biden OMB AI Memos, the OMB AI Use Memo states that “no system should be considered too simple to qualify as covered AI due to a lack of technical complexity.”

    The OMB AI Procurement Memo applies to “AI systems or services that are acquired by or on behalf of covered agencies,” excluding the Intelligence Community, and includes “data systems, software, applications, tools, or utilities” that are “established primarily” for researching, developing, or implementing AI or where an “AI capability” is integrated into another process, operational activity, or technology system.  The memo excludes AI that is “embedded” in “common commercial products” that are widely available for commercial use and have “substantial non-AI purposes or functionalities,” along with AI “used incidentally by a contractor” during contract performance.  In other words, the policies are targeted at software that is primarily used for its AI capabilities, rather than at software that happens to incorporate AI.

Continue Reading OMB Issues First Trump 2.0-Era Requirements for AI Use and Procurement by Federal Agencies

On Friday, the White House released an executive summary of the policy reviews President Trump ordered in his America First Trade Policy (AFTP) memorandum, issued on January 20.  Although the full report to the President is nonpublic, according to the executive summary it contains twenty-four chapters, organized into three main

Continue Reading Agencies Deliver America First Trade Policy Recommendations to White House

President Trump recently issued two separate Executive Orders (EOs) that will have implications for how federal agencies seek to promote the administration’s goal of attracting domestic and foreign investment to industrial projects in the United States, with particular implications for the semiconductor and critical minerals industries. 

  1. An EO on March 31st establishes an “Investment Accelerator” office within the Department of Commerce that will be responsible for overseeing the implementation of the CHIPS Program—including the negotiation of agreements under the CHIPS Act.  This office will also provide technical and regulatory support for investors, and seek to facilitate research collaborations between private industry and national labs. 
  2. An earlier EO issued on March 20th seeks to mobilize federal lending and leasing authorities at the Department of Defense (DoD), the U.S. International Development Finance Corporation (DFC), and other federal agencies to support the development of domestic critical mineral projects.  Per an accompanying fact sheet, the White House is taking a broad interpretation of covered minerals under this March 20th Order and will seek to include materials such as coal. 

Both EOs are notable efforts by the White House to align federal spending and financial assistance programs with the Trump Administration’s priorities, which have variously included calls to promote self-sufficiency in critical materials and to promote “energy independence” and “energy dominance.”  These efforts come against a backdrop in which the Administration is also pursuing the use of tariffs to promote U.S. manufacturing, and taking steps to review and in some cases modify or terminate Biden-era infrastructure and energy-related grants.  More details are provided below.

Continue Reading Trump Administration Issues Executive Orders that Seek to Shape CHIPS Program and Promote Domestic Mineral Production

Last week, the Georgia state Senate authorized a sweeping investigation of former gubernatorial candidate Stacey Abrams, continuing a national trend of increased state legislative investigations.  Although state-level investigations continue to lag far behind congressional investigations, state legislatures appear to be replicating what we see on the federal level with increasing

Continue Reading Georgia Senate Launches Abrams-Focused Inquiry, Signaling Growing Risk of State Legislative Investigations

Kenya has released its first National Artificial Intelligence Strategy (2025–2030), a landmark document on the continent that sets out a government-led vision for ethical, inclusive, and innovation-driven AI adoption. Framed as a foundational step in the country’s digital transformation agenda, the strategy articulates policy ambitions that will be of interest to global companies developing, deploying, or investing in AI technologies across Africa.

While the strategy is explicitly domestic in focus, its framing—and the architecture of its governance, infrastructure, and data pillars—reflects a broader trend: the localization of global AI governance norms in high-growth, emerging markets.

What the Strategy Means for Global Technology Governance

The strategy touches on several themes that intersect with enterprise risk, product development, and regulatory foresight for multinationals:

  • Data governance and sovereignty: Kenya signals a strong intent to develop AI within national parameters, grounded in local data ecosystems. The strategy explicitly references data privacy, cybersecurity, and ethics as core enablers of the AI ecosystem. For global companies with cloud-based models or cross-border data transfer frameworks, these developments may signal localization pressures or evolving consent standards.
  • Sector-specific use cases: Healthcare, agriculture, financial services, and public administration are named as strategic AI priorities. Companies operating in the life sciences, health tech, or diagnostics space should watch closely for how regulatory authorities may interpret and apply ethical or risk-based AI guidelines—especially where AI is used in clinical decision-making, diagnostics, or personalized medicine.
  • Public-private AI infrastructure development: The strategy envisages expanded digital infrastructure, data centers, and cloud resources, as well as national research hubs. This may create commercial opportunities—but could also trigger localization requirements or procurement-related restrictions, particularly for telecommunications and hyperscale cloud providers.
  • Future legal frameworks: The current strategy is not itself a binding legal instrument, but it points to future policy development—especially around governance, regulatory oversight, and risk classification of AI systems. Teams advising on AI risk, litigation exposure, and AI-assisted products (including generative tools) will want to track the next wave of draft legislation and implementation guidance.

Continue Reading Kenya’s AI Strategy 2025–2030: Signals for Global Companies Operating in Africa

Four Internet of Things (IoT)-related tax relief provisions are due to expire on December 31, 2025.  Two bills were introduced in Brazil’s National Congress to extend these provisions and are currently in debate under a fast-track rule.  Companies that provide and implement IoT projects can engage congressional leaders to

Continue Reading Brazil’s Internet of Things Tax Relief Due to Expire in 2025

On March 24, the Senate Judiciary Subcommittee on the Constitution held a hearing on the “Censorship Industrial Complex,” where senators and witnesses expressed divergent views on risks to First Amendment rights.  Senator Eric Schmitt (R-MO), the Subcommittee Chair, began the hearing by warning that the “vast censorship enterprise that the

Continue Reading Senate Judiciary Subcommittee Holds Hearing on the “Censorship Industrial Complex”

On March 18, the Joint California Policy Working Group on AI Frontier Models (the “Working Group”) released its draft report on the regulation of foundation models, with the aim of providing an “evidence-based foundation for AI policy decisions” in California that “ensure[s] these powerful technologies benefit society globally while reasonably managing emerging risks.”  The Working Group was established by California Governor Gavin Newsom (D) in September 2024, following his veto of the Safe & Secure Innovation for Frontier AI Models Act (SB 1047), authored by California State Senator Scott Wiener (D-San Francisco).  The Working Group builds on California’s partnership with Stanford University and the University of California, Berkeley, established by Governor Newsom’s 2023 Executive Order on generative AI.

Noting that “foundation model capabilities have rapidly improved” since the veto of SB 1047 and that California’s “unique opportunity” to shape AI governance “may not remain open indefinitely,” the report assesses transparency, third-party risk assessment, and adverse event reporting requirements as key components for foundation model regulation.

Transparency Requirements.  The report finds that foundation model transparency requirements are a “necessary foundation” for AI regulation and recommends that policymakers “prioritize public-facing transparency to best advance accountability.”  Specifically, the report recommends transparency requirements that focus on five categories of information about foundation models: (1) training data acquisition, (2) developer safety practices, (3) developer security practices, (4) pre-deployment testing by developers and third parties, and (5) downstream impacts, potentially including disclosures from entities that host foundation models for download or use.

Third-Party Risk Assessments.  Noting that transparency “is often insufficient and requires supplementary verification mechanisms” for accountability, the report adds that third-party risk assessments are “essential” for “creating incentives for developers to increase the safety of their models.”  To support effective third-party AI evaluations, the report calls on policymakers to consider establishing safe harbors that indemnify public interest safety research and “routing mechanisms” to quickly communicate identified vulnerabilities to developers and affected parties. 

Whistleblower Protections.  Additionally, the report assesses the need for whistleblower protections for employees and contractors of foundation model developers.  The report advises policymakers to “consider protections that cover a broader range of [AI developer] activities,” such as failures to follow a company’s AI safety policy, even if reported conduct does not violate existing laws.

Continue Reading California Frontier AI Working Group Issues Report on Foundation Model Regulation