On November 20, 2025, the Securities and Exchange Commission (“SEC”) announced that it was voluntarily dismissing the case it brought against SolarWinds Corp. (“SolarWinds”) and its information security officer, Timothy Brown, regarding the company’s security practices and related statements in connection with the “Sunburst” cybersecurity incident. The SEC stated in a brief release that its decision to dismiss with prejudice the case against SolarWinds and Mr. Brown was “in the exercise of its discretion” and “does not necessarily reflect the Commission’s position on any other case.”
Jess Gonzalez Valenzuela
Jess Gonzalez Valenzuela (they/them and she/her) is an associate in the firm’s San Francisco office and a member of the Data Privacy and Cybersecurity Practice Group. Jess assists clients with cybersecurity issues such as incident response, risk management, internal investigations, and regulatory compliance. Additionally, Jess supports clients navigating complex data privacy challenges by offering regulatory compliance guidance tailored to specific business practices. Jess is also a member of the E-Discovery, AI, and Information Governance Practice Group and maintains an active pro bono practice.
Jess is committed to Diversity, Equity, and Inclusion (DEI) initiatives within the legal field. They are a member of Covington’s LGBTQ+ and Latino Firm Resource Groups, and they serve as co-lead for the First Generation Professionals Network and the Disability and Neurodiversity Network in the San Francisco office.
U.S. Tech Legislative & Regulatory Update – 2025 Mid-Year Update
This update highlights key mid-year legislative and regulatory developments and builds on our first quarter update related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), Internet of Things (“IoT”), and cryptocurrencies and blockchain developments.
I. Federal AI Legislative Developments
In the first session of the 119th Congress, lawmakers rejected a proposed moratorium on state and local enforcement of AI laws and advanced several AI legislative proposals focused on deepfake-related harms. Specifically, on July 1, after weeks of negotiations, the Senate voted 99-1 to strike a proposed 10-year moratorium on state and local enforcement of AI laws from the budget reconciliation package, the One Big Beautiful Bill Act (H.R. 1), which President Trump signed into law. The vote to strike the moratorium followed the collapse of an agreement on revised language that would have shortened the moratorium to 5 years and allowed states to enforce “generally applicable laws,” including child online safety, digital replica, and child sexual abuse material (“CSAM”) laws, that do not have an “undue or disproportionate effect” on AI. Congress could technically still consider the moratorium during this session, but the chances of that happening are low given both the political atmosphere and the lack of a must-pass legislative vehicle in which it could be included. See our blog post on this topic for more information.
Additionally, lawmakers continue to focus legislation on deepfakes and intimate imagery. For example, on May 19, President Trump signed the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks (“TAKE IT DOWN”) Act (H.R. 633 / S. 146) into law, which requires online platforms to establish a notice and takedown process for nonconsensual intimate visual depictions, including certain depictions created using AI. See our blog post on this topic for more information. Meanwhile, members of Congress continued to pursue additional legislation to address deepfake-related harms, such as the STOP CSAM Act of 2025 (S. 1829 / H.R. 3921) and the Disrupt Explicit Forged Images And Non-Consensual Edits (“DEFIANCE”) Act (H.R. 3562 / S. 1837).
U.S. Tech Legislative & Regulatory Update – First Quarter 2025
This quarterly update highlights key legislative, regulatory, and litigation developments in the first quarter of 2025 related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), and cryptocurrencies and blockchain.
I. Artificial Intelligence
Federal Legislative Developments
In the first quarter, members of Congress introduced several AI bills addressing…
U.S. Tech Legislative, Regulatory & Litigation Update – Third Quarter 2024
This quarterly update highlights key legislative, regulatory, and litigation developments in the third quarter of 2024 related to artificial intelligence (“AI”) and connected and automated vehicles (“CAVs”). As noted below, some of these developments provide industry with the opportunity for participation and comment.
I. Artificial Intelligence
Federal Legislative Developments
There continued to be strong bipartisan interest in passing federal legislation related to AI. While it has been challenging to pass legislation through this Congress, there remains the possibility that one or more of the more targeted bills that have bipartisan support and Committee approval could advance during the lame duck period.
- Senate Commerce, Science, and Transportation Committee: Lawmakers in the Senate Commerce, Science, and Transportation Committee moved forward with nearly a dozen AI-related bills, including legislation focused on developing voluntary technical guidelines for AI systems and establishing AI testing and risk assessment frameworks.
- In July, the Committee voted to advance the Validation and Evaluation for Trustworthy (VET) Artificial Intelligence Act (S.4769), which was introduced by Senators John Hickenlooper (D-CO) and Shelley Moore Capito (R-WV). The Act would require the National Institute of Standards and Technology (“NIST”) to develop voluntary guidelines and specifications for internal and external assurances of AI systems, in collaboration with public and private sector organizations.
- In August, the Promoting United States Leadership in Standards Act of 2024 (S.3849) was placed on the Senate legislative calendar after advancing out of the Committee in July. Introduced in February 2024 by Senators Mark Warner (D-VA) and Marsha Blackburn (R-TN), the Act would require NIST to support U.S. involvement in the development of AI technical standards through briefings, pilot programs, and other activities.
- In July, the Future of Artificial Intelligence Innovation Act of 2024 (S.4178)—introduced in April by Senators Maria Cantwell (D-WA), Todd Young (R-IN), John Hickenlooper (D-CO), and Marsha Blackburn (R-TN)—was ordered to be reported out of the Committee and gained three additional co-sponsors: Senators Roger F. Wicker (R-MS), Ben Ray Lujan (D-NM), and Kyrsten Sinema (I-AZ). The Act would codify the AI Safety Institute, which would be required to develop voluntary guidelines and standards for promoting AI innovation through public-private partnerships and international alliances.
- In July, the Artificial Intelligence Research, Innovation, and Accountability Act of 2023 (S.3312) passed out of the Committee, as amended. Introduced in November 2023 by Senators John Thune (R-SD), Amy Klobuchar (D-MN), Roger Wicker (R-MS), John Hickenlooper (D-CO), Ben Ray Lujan (D-NM), and Shelley Moore Capito (R-WV), the Act would establish a comprehensive regulatory framework for “high-impact” AI systems, including testing and evaluation standards, risk assessment requirements, and transparency report requirements. The Act would also require NIST to develop sector-specific recommendations for agency oversight of high-impact AI, and to research and develop means for distinguishing between content created by humans and AI systems.
- Senate Homeland Security and Governmental Affairs Committee: In July, the Senate Homeland Security and Governmental Affairs Committee voted to advance the PREPARED for AI Act (S.4495). Introduced in June by Senators Gary Peters (D-MI) and Thom Tillis (R-NC), the Act would establish a risk-based framework for the procurement and use of AI by federal agencies and create a Chief AI Officers Council and agency AI Governance Boards to ensure that federal agencies benefit from advancements in AI.
- National Defense Authorization Act for Fiscal Year 2025: In August, Senators Gary Peters (D-MI) and Mike Braun (R-IN) proposed an amendment (S.Amdt.3232) to the National Defense Authorization Act for Fiscal Year 2025 (S.4638) (“NDAA”). The amendment would add the Transparent Automated Governance Act and the AI Leadership Training Act to the NDAA. The Transparent Automated Governance Act would require the Office of Management and Budget (“OMB”) to issue guidance to agencies to implement transparency practices relating to the use of AI and other automated systems. The AI Leadership Training Act would require OMB to establish a training program for federal procurement officials on the operational benefits and privacy risks of AI. The Act would also require the Office of Personnel Management (“OPM”) to establish a training program on AI for federal management officials and supervisors.