Jennifer Johnson

Jennifer Johnson is a partner specializing in communications, media, and technology matters who serves as Co-Chair of Covington’s Technology Industry Group and its global and multi-disciplinary Artificial Intelligence (AI) and Internet of Things (IoT) Groups. She represents and advises technology companies, content distributors, television companies, trade associations, and other entities on a wide range of media and technology matters. Jennifer has three decades of experience advising clients in the communications, media, and technology sectors, and has held leadership roles in these practices for more than twenty years. On technology issues, she collaborates with Covington’s global, multi-disciplinary team to assist companies in navigating the complex statutory and regulatory constructs surrounding these evolving technologies, providing product counseling and handling technology transactions related to connected and autonomous vehicles, internet-connected devices, artificial intelligence, smart ecosystems, and other IoT products and services. Jennifer serves on the Board of Editors of The Journal of Robotics, Artificial Intelligence & Law.

Jennifer assists clients in developing and pursuing strategic business and policy objectives before the Federal Communications Commission (FCC) and Congress and through transactions and other business arrangements. She regularly advises clients on FCC regulatory matters and advocates frequently before the FCC. Jennifer has extensive experience negotiating content acquisition and distribution agreements for media and technology companies, including program distribution agreements, network affiliation and other program rights agreements, and agreements providing for the aggregation and distribution of content on over-the-top app-based platforms. She also assists investment clients in structuring, evaluating, and pursuing potential investments in media and technology companies.

This quarterly update highlights key legislative, regulatory, and litigation developments in the first quarter of 2025 related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), and cryptocurrencies and blockchain. 

I. Artificial Intelligence

Federal Legislative Developments

In the first quarter, members of Congress introduced several AI bills addressing

Continue Reading U.S. Tech Legislative & Regulatory Update – First Quarter 2025

On March 18, the Joint California Policy Working Group on AI Frontier Models (the “Working Group”) released its draft report on the regulation of foundation models, with the aim of providing an “evidence-based foundation for AI policy decisions” in California that “ensure[s] these powerful technologies benefit society globally while reasonably managing emerging risks.”  The Working Group was established by California Governor Gavin Newsom (D) in September 2024, following his veto of the Safe & Secure Innovation for Frontier AI Models Act (SB 1047), authored by California State Senator Scott Wiener (D-San Francisco).  The Working Group builds on California’s partnership with Stanford University and the University of California, Berkeley, established by Governor Newsom’s 2023 Executive Order on generative AI.

Noting that “foundation model capabilities have rapidly improved” since the veto of SB 1047 and that California’s “unique opportunity” to shape AI governance “may not remain open indefinitely,” the report assesses transparency, third-party risk assessment, and adverse event reporting requirements as key components for foundation model regulation.

Transparency Requirements.  The report finds that foundation model transparency requirements are a “necessary foundation” for AI regulation and recommends that policymakers “prioritize public-facing transparency to best advance accountability.”  Specifically, the report recommends transparency requirements that focus on five categories of information about foundation models: (1) training data acquisition, (2) developer safety practices, (3) developer security practices, (4) pre-deployment testing by developers and third parties, and (5) downstream impacts, potentially including disclosures from entities that host foundation models for download or use.

Third-Party Risk Assessments.  Noting that transparency “is often insufficient and requires supplementary verification mechanisms” for accountability, the report adds that third-party risk assessments are “essential” for “creating incentives for developers to increase the safety of their models.”  To support effective third-party AI evaluations, the report calls on policymakers to consider establishing safe harbors that indemnify public interest safety research and “routing mechanisms” to quickly communicate identified vulnerabilities to developers and affected parties. 

Whistleblower Protections.  Additionally, the report assesses the need for whistleblower protections for employees and contractors of foundation model developers.  The report advises policymakers to “consider protections that cover a broader range of [AI developer] activities,” such as failures to follow a company’s AI safety policy, even if reported conduct does not violate existing laws.

Continue Reading California Frontier AI Working Group Issues Report on Foundation Model Regulation

On February 27, California State Senator Scott Wiener (D-San Francisco) released the text of SB 53, reviving efforts to establish AI safety regulations in a state that is home to both Silicon Valley and the nation’s first comprehensive privacy law.  SB 53 proposes a significantly narrower approach compared to

Continue Reading California Senator Introduces AI Safety Bill

Authors: Jennifer Johnson, Jayne Ponder, August Gweon, Analese Bridges

State lawmakers are considering a diverse array of AI legislation, with hundreds of bills introduced in 2025.  As described further in this blog post, many of these AI legislative proposals fall into several key categories: (1) comprehensive consumer protection legislation similar to the Colorado AI Act, (2) sector-specific legislation on automated decision-making, (3) chatbot regulation, (4) generative AI transparency requirements, (5) AI data center and energy usage requirements, and (6) frontier model public safety legislation.  Although these categories represent just a subset of current AI legislative activity, they illustrate the major priorities of state legislatures and highlight new AI laws that may be on the horizon.

  • Consumer Protection.  Lawmakers in over a dozen states have introduced legislation aimed at reducing algorithmic discrimination in high-risk AI or automated decision-making systems used to make “consequential decisions,” embracing the risk- and role-based approach of the Colorado AI Act.  In general, these frameworks would establish developer and deployer duties of care to protect consumers from algorithmic discrimination and would require risks or instances of algorithmic discrimination to be reported to state attorneys general.  They would also require notices to consumers and disclosures to other parties and establish consumer rights related to the AI system.  For example, Virginia’s High-Risk AI Developer & Deployer Act (HB 2094), which follows this approach, passed out of Virginia’s legislature this month.
  • Sector-Specific Automated Decision-Making.  Lawmakers in more than a dozen states have introduced legislation that would regulate the use of AI or automated decision-making tools (“ADMT”) in specific sectors, including healthcare, insurance, employment, and finance.  For example, Massachusetts HD 3750 would amend the state’s health insurance consumer protection law to require healthcare insurance carriers to disclose the use of AI or ADMT for reviewing insurance claims and report AI and training data information to the Massachusetts Division of Insurance.  Other bills would regulate the use of ADMT in the financial sector, such as New York A773, which would require banks that use ADMT for lending decisions to conduct annual disparate impact analyses and disclose such analyses to the New York Attorney General.  Relatedly, state legislatures are considering a wide range of approaches to regulating employers’ uses of AI and ADMT.  For example, Georgia SB 164 and Illinois SB 2255 would both prohibit employers from using ADMT to set wages unless certain requirements are satisfied.

Continue Reading Blog Post: State Legislatures Consider New Wave of 2025 AI Legislation

Yesterday, the Trump Administration issued an Executive Order titled “Ensuring Accountability for All Agencies” (the EO).  The EO asserts Presidential authority over independent agencies, including the Federal Trade Commission (FTC), Federal Communications Commission (FCC), and Securities and Exchange Commission (SEC).  While the precise impacts remain to be seen, overall the EO will likely result in greater involvement by the White House in policymaking at independent agencies, both in substance and process.

OIRA Review of Agency Regulations.  The EO amends the Clinton Administration-era Executive Order 12866, which established a review process for regulations promulgated by executive branch departments and agencies but excluded independent agencies from that process.  The process includes requirements that departments and agencies submit “significant regulatory actions” to the Office of Information and Regulatory Affairs (OIRA) for review before publication in the Federal Register.  Executive Order 12866 defines “significant regulatory action” to mean “any regulatory action that is likely to result in a rule that may:”

  1. Have an annual effect on the economy of $100 million or more or adversely affect in a material way the economy, a sector of the economy, productivity, competition, jobs, the environment, public health or safety, or State, local, or tribal governments or communities;
  2. Create a serious inconsistency or otherwise interfere with an action taken or planned by another agency;
  3. Materially alter the budgetary impact of entitlements, grants, user fees, or loan programs or the rights and obligations of recipients thereof; or
  4. Raise novel legal or policy issues arising out of legal mandates, the President’s priorities, or the principles set forth in this Executive order.

Yesterday’s EO revises the definition of “agencies” to remove an exemption for “independent regulatory agencies.”  The amended definition includes an exemption for the Federal Reserve “in its conduct of monetary policy.”

Performance Standards and Management Objectives.  The EO directs the Director of the Office of Management and Budget (OMB) to “establish performance standards and management objectives for independent agency heads” and “report periodically to the President on their performance and efficiency in attaining such standards and objectives.”

Continue Reading Trump Administration Asserts Presidential Authority Over Independent Agencies

On February 6, the White House Office of Science & Technology Policy (“OSTP”) and National Science Foundation (“NSF”) issued a Request for Information (“RFI”) seeking public input on the “Development of an Artificial Intelligence Action Plan.”  The RFI marks a first step toward the implementation of the Trump Administration’s January

Continue Reading Trump Administration Seeks Public Comment on AI Action Plan

On December 4, 2024, the Federal Communications Commission (the “Commission”) announced that it had selected UL Solutions to serve as the Lead Administrator for its Internet of Things Cybersecurity Labeling Program (the “IoT Labeling Program”).  The Commission also conditionally approved UL Solutions as a Cybersecurity Label Administrator (“CLA”) for the

Continue Reading FCC Takes Next Step Towards U.S. Cyber Trust Mark

On November 20, 2024, the Federal Communications Commission (the “Commission”) issued a Second Report and Order in which it adopted rules (“the Order”) to facilitate the transition from Dedicated Short Range Communications (“DSRC”) technology to Cellular-Vehicle-to-Everything (“C-V2X”) technology for the Intelligent Transportation System (“ITS,” also referred to as the

Continue Reading FCC Adopts Rules Facilitating the Transition to C-V2X Technology for the Connected Vehicle Ecosystem

This quarterly update highlights key legislative, regulatory, and litigation developments in the third quarter of 2024 related to artificial intelligence (“AI”) and connected and automated vehicles (“CAVs”).  As noted below, some of these developments provide industry with the opportunity for participation and comment.

I. Artificial Intelligence

Federal Legislative Developments

There continued to be strong bipartisan interest in passing federal legislation related to AI.  While it has been challenging to pass legislation through this Congress, there remains the possibility that one or more of the more targeted bills that have bipartisan support and Committee approval could advance during the lame duck period.

  • Senate Commerce, Science, and Transportation Committee: Lawmakers in the Senate Commerce, Science, and Transportation Committee moved forward with nearly a dozen AI-related bills, including legislation focused on developing voluntary technical guidelines for AI systems and establishing AI testing and risk assessment frameworks. 
    • In July, the Committee voted to advance the Validation and Evaluation for Trustworthy (VET) Artificial Intelligence Act (S.4769), which was introduced by Senators John Hickenlooper (D-CO) and Shelley Moore Capito (R-WV).  The Act would require the National Institute of Standards and Technology (“NIST”) to develop voluntary guidelines and specifications for internal and external assurances of AI systems, in collaboration with public and private sector organizations. 
    • In August, the Promoting United States Leadership in Standards Act of 2024 (S.3849) was placed on the Senate legislative calendar after advancing out of the Committee in July.  Introduced in February 2024 by Senators Mark Warner (D-VA) and Marsha Blackburn (R-TN), the Act would require NIST to support U.S. involvement in the development of AI technical standards through briefings, pilot programs, and other activities.  
    • In July, the Future of Artificial Intelligence Innovation Act of 2024 (S.4178)—introduced in April by Senators Maria Cantwell (D-WA), Todd Young (R-IN), John Hickenlooper (D-CO), and Marsha Blackburn (R-TN)—was ordered to be reported out of the Committee and gained three additional co-sponsors: Senators Roger F. Wicker (R-MS), Ben Ray Lujan (D-NM), and Kyrsten Sinema (I-AZ).  The Act would codify the AI Safety Institute, which would be required to develop voluntary guidelines and standards for promoting AI innovation through public-private partnerships and international alliances.
    • In July, the Artificial Intelligence Research, Innovation, and Accountability Act of 2023 (S.3312) passed out of the Committee, as amended.  Introduced in November 2023 by Senators John Thune (R-SD), Amy Klobuchar (D-MN), Roger Wicker (R-MS), John Hickenlooper (D-CO), Ben Ray Lujan (D-NM), and Shelley Moore Capito (R-WV), the Act would establish a comprehensive regulatory framework for “high-impact” AI systems, including testing and evaluation standards, risk assessment requirements, and transparency report requirements.  The Act would also require NIST to develop sector-specific recommendations for agency oversight of high-impact AI, and to research and develop means for distinguishing between content created by humans and AI systems.
  • Senate Homeland Security and Governmental Affairs Committee:  In July, the Senate Homeland Security Committee voted to advance the PREPARED for AI Act (S.4495).  Introduced in June by Senators Gary Peters (D-MI) and Thomas Tillis (R-NC), the Act would establish a risk-based framework for the procurement and use of AI by federal agencies and create a Chief AI Officers Council and agency AI Governance Board to ensure that federal agencies benefit from advancements in AI.
  • National Defense Authorization Act for Fiscal Year 2025:  In August, Senators Gary Peters (D-MI) and Mike Braun (R-IN) proposed an amendment (S.Amdt.3232) to the National Defense Authorization Act for Fiscal Year 2025 (S.4638) (“NDAA”).  The amendment would add the Transparent Automated Governance Act and the AI Leadership Training Act to the NDAA.  The Transparent Automated Governance Act would require the Office of Management and Budget (“OMB”) to issue guidance to agencies to implement transparency practices relating to the use of AI and other automated systems.  The AI Leadership Training Act would require OMB to establish a training program for federal procurement officials on the operational benefits and privacy risks of AI.  The Act would also require the Office of Personnel Management (“OPM”) to establish a training program on AI for federal management officials and supervisors.   

Continue Reading U.S. Tech Legislative, Regulatory & Litigation Update – Third Quarter 2024

On July 30, 2024, the Federal Register published the Federal Communications Commission’s (the “FCC”) Report and Order (the “Order”) creating a voluntary cybersecurity labeling program for Internet of Things (“IoT”) devices.  As reported in our blog post issued shortly before the Order was approved on March 14, 2024, this program

Continue Reading FCC Adopts Order Establishing Voluntary IoT Labeling Program