Jennifer Johnson

Jennifer Johnson is a partner specializing in communications, media and technology matters who serves as Co-Chair of Covington’s Technology Industry Group and its global and multi-disciplinary Artificial Intelligence (AI) and Internet of Things (IoT) Groups. She represents and advises technology companies, content distributors, television companies, trade associations, and other entities on a wide range of media and technology matters. Jennifer has three decades of experience advising clients in the communications, media and technology sectors, and has held leadership roles in these practices for more than twenty years. On technology issues, she collaborates with Covington’s global, multi-disciplinary team to assist companies in navigating the complex statutory and regulatory constructs surrounding this evolving area, including product counseling and technology transactions related to connected and autonomous vehicles, internet-connected devices, artificial intelligence, smart ecosystems, and other IoT products and services. Jennifer serves on the Board of Editors of The Journal of Robotics, Artificial Intelligence & Law.

Jennifer assists clients in developing and pursuing strategic business and policy objectives before the Federal Communications Commission (FCC) and Congress and through transactions and other business arrangements. She regularly advises clients on FCC regulatory matters and advocates frequently before the FCC. Jennifer has extensive experience negotiating content acquisition and distribution agreements for media and technology companies, including program distribution agreements, network affiliation and other program rights agreements, and agreements providing for the aggregation and distribution of content on over-the-top app-based platforms. She also assists investment clients in structuring, evaluating, and pursuing potential investments in media and technology companies.

This update highlights key mid-year legislative and regulatory developments and builds on our first quarter update related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), Internet of Things (“IoT”), and cryptocurrencies and blockchain developments.

I. Federal AI Legislative Developments

In the first session of the 119th Congress, lawmakers rejected a proposed moratorium on state and local enforcement of AI laws and advanced several AI legislative proposals focused on deepfake-related harms.  Specifically, on July 1, after weeks of negotiations, the Senate voted 99-1 to strike a proposed 10-year moratorium on state and local enforcement of AI laws from the budget reconciliation package, the One Big Beautiful Bill Act (H.R. 1), which President Trump signed into law.  The vote to strike the moratorium follows the collapse of an agreement on revised language that would have shortened the moratorium to 5 years and allowed states to enforce “generally applicable laws,” including child online safety, digital replica, and CSAM laws, that do not have an “undue or disproportionate effect” on AI.  Congress could technically still consider the moratorium during this session, but the chances of that happening are low based on both the political atmosphere and the lack of a must-pass legislative vehicle in which it could be included.  See our blog post on this topic for more information.

Additionally, lawmakers continue to focus legislation on deepfakes and intimate imagery.  For example, on May 19, President Trump signed the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks (“TAKE IT DOWN”) Act (H.R. 633 / S. 146) into law, which requires online platforms to establish a notice and takedown process for nonconsensual intimate visual depictions, including certain depictions created using AI.  See our blog post on this topic for more information.  Meanwhile, members of Congress continued to pursue additional legislation to address deepfake-related harms, such as the STOP CSAM Act of 2025 (S. 1829 / H.R. 3921) and the Disrupt Explicit Forged Images And Non-Consensual Edits (“DEFIANCE”) Act (H.R. 3562 / S. 1837).

Continue Reading U.S. Tech Legislative & Regulatory Update – 2025 Mid-Year Update

On July 29, 2025, the National Institute of Standards and Technology (“NIST”) unveiled an outline for preliminary, stakeholder-driven standards, known as a “zero draft,” for AI testing, evaluation, verification, and validation (“TEVV”).  This outline is part of NIST’s AI Standards Zero Drafts pilot project, which was announced on March 25, 2025, as we previously reported. The goal is to create a flexible, high-level framework that companies can use to design their own AI testing and validation procedures. Of note, NIST is not prescribing exact methods for testing and validation. Instead, it offers a structure around key terms, lifecycle stages, and guiding principles that align with future international standards. NIST has asked for stakeholder input on the topics, scope, and priorities of the Zero Drafts process, and feedback is open until September 12, 2025.

The NIST outline breaks AI TEVV into several foundational elements.

Continue Reading NIST Welcomes Comments for AI Standards Zero Drafts Project

On July 23, the White House released its AI Action Plan, outlining the key priorities of the Trump Administration’s AI policy agenda.  In parallel, President Trump signed three AI executive orders directing the Executive Branch to implement the AI Action Plan’s policies on “Preventing Woke AI in…

Continue Reading Trump Administration Issues AI Action Plan and Series of AI Executive Orders

On June 17, the Joint California Policy Working Group on AI Frontier Models (“Working Group”) issued its final report on frontier AI policy, following public feedback on the draft version of the report released in March.  The report describes “frontier models” as the “most capable” subset of foundation models, or…

Continue Reading California Frontier AI Working Group Issues Final Report on Frontier Model Regulation

On June 12, the New York legislature passed the Responsible AI Safety & Education (“RAISE”) Act (S 6953), a frontier model public safety bill that would establish safeguard, reporting, disclosure, and other requirements for large developers of frontier AI models.  If signed into law by Governor Kathy Hochul…

Continue Reading New York Legislature Passes Sweeping AI Safety Legislation

On June 3, 2025, the OECD introduced a new framework called AI Capability Indicators that compares AI capabilities to human abilities. The framework is intended to help policymakers assess the progress of AI systems and enable informed policy responses to new AI advancements. The indicators are designed to help non-technical policymakers understand the degree of advancement of different AI capabilities. AI researchers, policymakers, and other stakeholder groups, including economists, psychologists, and education specialists, are invited to submit feedback on the current beta framework.

There are nine categories of AI capability indicators, each presented on a five-level scale mapping AI progression toward full human equivalence, with Level 5 representing the most challenging capabilities for AI systems to attain. Each category rates current AI performance against human-equivalent capability according to the latest available evidence, as follows:

  • Language – ranges from basic keyword recognition (Level 1) to contextually aware discourse generation and open-ended creative writing (Level 5). The OECD considers that the capability level of currently available AI systems is Level 3: reliable understanding and generation of semantic meaning using multi-modal language.
  • Social interaction – ranges from social cue interpretation (Level 1) to sophisticated emotional intelligence and multi-party conversational fluency (Level 5). The OECD considers that the capability level of currently available AI systems is Level 2: basic social perception with the ability to adapt slightly based on experience, emotions detected through tone and context, and limited social memory.
  • Problem solving – ranges from rule-based task execution (Level 1) to handling new scenarios that require adaptive reasoning, long-term planning, and multi-step inference (Level 5). The OECD considers that the capability level of currently available AI systems is Level 2: integration of qualitative and quantitative reasoning to address complex problems, with the ability to handle multiple qualitative states and predict how systems may evolve or change over time.
  • Creativity – measures originality and generative capacity in art, ranging from template-based generation (Level 1) to creation of entirely novel concepts (Level 5). The OECD considers that the capability level of currently available AI systems is Level 3: generation of output that deviates considerably from the training data, generalization of skills to new tasks, and integration of ideas across domains.
  • Metacognition and critical thinking – ranges from basic interpretation or recognition of information (Level 1) to managing complex trade-offs between goals, resources, and necessary skills (Level 5). The OECD considers that the capability level of currently available AI systems is Level 2: monitoring and adjustment of the system’s own understanding and approach according to each problem.
  • Knowledge, learning, and memory – ranges from data ingestion efficiency and retention (Level 1) to insight-generation from disparate knowledge sources (Level 5). The OECD considers that the capability level of currently available AI systems is Level 3: understanding semantics of information through distributed representations and generalization to novel situations.
  • Vision – ranges from basic object recognition (Level 1) to dynamic scene understanding and multi-object tracking under varied environmental conditions (Level 5). The OECD considers that the capability level of currently available AI systems is Level 3: adapting to variations in target object appearance and lighting, performing multiple subtasks, and coping with known variations in data and situations.
  • Manipulation – ranges from fine motor control in robotics like picking up simple items (Level 1) to dexterous manipulation of deformable objects (Level 5). The OECD considers that the capability level of currently available AI systems is Level 2: handling different object shapes and moderately pliable materials and operating in controlled environments with low to moderate clutter.
  • Robotic intelligence – integrates multiple subdomains like navigation, manipulation, and perception ranging from pre-programmed action (Level 1) to fully autonomous, self-learning robotic agents (Level 5). The OECD considers that the capability level of currently available robotic systems is Level 2: operating in partially known and semi-structured environments with some well-defined variability.

Continue Reading OECD Introduces AI Capability Indicators for Policymakers

This quarterly update highlights key legislative, regulatory, and litigation developments in the first quarter of 2025 related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), and cryptocurrencies and blockchain. 

I. Artificial Intelligence

A. Federal Legislative Developments

In the first quarter, members of Congress introduced several AI bills addressing…

Continue Reading U.S. Tech Legislative & Regulatory Update – First Quarter 2025

On March 18, the Joint California Policy Working Group on AI Frontier Models (the “Working Group”) released its draft report on the regulation of foundation models, with the aim of providing an “evidence-based foundation for AI policy decisions” in California that “ensure[s] these powerful technologies benefit society globally while reasonably managing emerging risks.”  The Working Group was established by California Governor Gavin Newsom (D) in September 2024, following his veto of California State Senator Scott Wiener (D-San Francisco)’s Safe & Secure Innovation for Frontier AI Models Act (SB 1047).  The Working Group builds on California’s partnership with Stanford University and the University of California, Berkeley, established by Governor Newsom’s 2023 Executive Order on generative AI.

Noting that “foundation model capabilities have rapidly improved” since the veto of SB 1047 and that California’s “unique opportunity” to shape AI governance “may not remain open indefinitely,” the report assesses transparency, third-party risk assessment, and adverse event reporting requirements as key components for foundation model regulation.

Transparency Requirements.  The report finds that foundation model transparency requirements are a “necessary foundation” for AI regulation and recommends that policymakers “prioritize public-facing transparency to best advance accountability.”  Specifically, the report recommends transparency requirements that focus on five categories of information about foundation models: (1) training data acquisition, (2) developer safety practices, (3) developer security practices, (4) pre-deployment testing by developers and third parties, and (5) downstream impacts, potentially including disclosures from entities that host foundation models for download or use.

Third-Party Risk Assessments.  Noting that transparency “is often insufficient and requires supplementary verification mechanisms” for accountability, the report adds that third-party risk assessments are “essential” for “creating incentives for developers to increase the safety of their models.”  To support effective third-party AI evaluations, the report calls on policymakers to consider establishing safe harbors that indemnify public interest safety research and “routing mechanisms” to quickly communicate identified vulnerabilities to developers and affected parties. 

Whistleblower Protections.  Additionally, the report assesses the need for whistleblower protections for employees and contractors of foundation model developers.  The report advises policymakers to “consider protections that cover a broader range of [AI developer] activities,” such as failures to follow a company’s AI safety policy, even if reported conduct does not violate existing laws.

Continue Reading California Frontier AI Working Group Issues Report on Foundation Model Regulation

On February 27, California State Senator Scott Wiener (D-San Francisco) released the text of SB 53, reviving efforts to establish AI safety regulations in a state that is home to both Silicon Valley and the nation’s first comprehensive privacy law.  SB 53 proposes a significantly narrower approach compared to…

Continue Reading California Senator Introduces AI Safety Bill

Authors: Jennifer Johnson, Jayne Ponder, August Gweon, Analese Bridges

State lawmakers are considering a diverse array of AI legislation, with hundreds of bills introduced in 2025.  As described further in this blog post, many of these AI legislative proposals fall into several key categories: (1) comprehensive consumer protection legislation similar to the Colorado AI Act, (2) sector-specific legislation on automated decision-making, (3) chatbot regulation, (4) generative AI transparency requirements, (5) AI data center and energy usage requirements, and (6) frontier model public safety legislation.  Although these categories represent just a subset of current AI legislative activity, they illustrate the major priorities of state legislatures and highlight new AI laws that may be on the horizon.

  • Consumer Protection.  Lawmakers in over a dozen states have introduced legislation aimed at reducing algorithmic discrimination in high-risk AI or automated decision-making systems used to make “consequential decisions,” embracing the risk- and role-based approach of the Colorado AI Act.  In general, these frameworks would establish developer and deployer duties of care to protect consumers from algorithmic discrimination and would require risks or instances of algorithmic discrimination to be reported to state attorneys general.  They would also require notices to consumers and disclosures to other parties and establish consumer rights related to the AI system.  For example, Virginia’s High-Risk AI Developer & Deployer Act (HB 2094), which follows this approach, passed out of Virginia’s legislature this month.
  • Sector-Specific Automated Decision-Making.  Lawmakers in more than a dozen states have introduced legislation that would regulate the use of AI or automated decision-making tools (“ADMT”) in specific sectors, including healthcare, insurance, employment, and finance.  For example, Massachusetts HD 3750 would amend the state’s health insurance consumer protection law to require healthcare insurance carriers to disclose the use of AI or ADMT for reviewing insurance claims and report AI and training data information to the Massachusetts Division of Insurance.  Other bills would regulate the use of ADMT in the financial sector, such as New York A773, which would require banks that use ADMT for lending decisions to conduct annual disparate impact analyses and disclose such analyses to the New York Attorney General.  Relatedly, state legislatures are considering a wide range of approaches to regulating employers’ uses of AI and ADMT.  For example, Georgia SB 164 and Illinois SB 2255 would both prohibit employers from using ADMT to set wages unless certain requirements are satisfied.

Continue Reading Blog Post: State Legislatures Consider New Wave of 2025 AI Legislation