Andrew Longhi

Andrew Longhi is an associate in the firm’s Washington, DC office and a member of the Data Privacy and Cybersecurity and Technology and Communications Regulation Practice Groups.

Andrew advises clients on a broad range of privacy and cybersecurity issues, including compliance obligations, commercial transactions involving personal information and cybersecurity risk, and responses to regulatory inquiries.

Andrew is admitted to the Bar under DC App. R. 46-A (Emergency Examination Waiver); practice supervised by DC Bar members.

With most state legislative sessions across the country adjourned or winding down without enacting significant artificial intelligence legislation, Colorado and California continue their steady drive to adopt comprehensive legislation regulating the development and deployment of AI systems. 

Colorado

Although Colorado’s AI law (SB 205), which Governor Jared Polis (D) signed into law in May, does not take effect until February 1, 2026, lawmakers have already begun a process for refining the nation’s first comprehensive AI law.  As we described here, the new law will require developers and deployers of “high-risk” AI systems to comply with certain requirements in order to mitigate risks of algorithmic discrimination. 

On June 13, Governor Polis, Attorney General Phil Weiser (D), and Senate Majority Leader Robert Rodriguez (D) issued a public letter announcing a “process to revise” the new law before it even takes effect and “minimize unintended consequences associated with its implementation.”  The revision process will address concerns that the high cost of compliance will adversely affect “home grown businesses” in Colorado, including through “barriers to growth and product development, job losses, and a diminished capacity to raise capital.”

The letter proposes “a handful of specific areas” for revision, including:

  • Refining SB 205’s definition of AI systems to focus on “the most high-risk systems” in order to align with federal measures and frameworks in states with substantial technology sectors.  This goal aligns with the officials’ call for “harmony across any regulatory framework adopted by states” to “limit the burden associated with a multi-state compliance scheme that deters investment and hamstrings small technology firms.”  The officials add that they “remain open to delays in the implementation” of the new law “to ensure such harmonization.”  
  • Narrowing SB 205’s requirements to focus on developers of high-risk systems and avoid regulating “small companies that may deploy AI within third-party software that they use in the ordinary course of business.”  This goal addresses concerns of Colorado businesses that the new law could “inadvertently impose prohibitively high costs” on AI deployers.
  • Shifting from a “proactive disclosure regime” to a “traditional enforcement regime managed by the Attorney General investigating matters after the fact.”  This goal also focuses on protecting Colorado’s small businesses from prohibitively high costs that could deter investment and hamper Colorado’s technology sector.

Continue Reading Colorado and California Continue to Refine AI Legislation as Legislative Sessions Wane

On May 2, 2024, the Federal Communications Commission (FCC) released a draft Notice of Proposed Rulemaking (NPRM) for consideration at the agency’s May 23 Open Meeting that proposes to “prohibit from recognition by the FCC and participation in [its] equipment authorization program, any [Telecommunications Certification Body (TCB)] or test lab in which an entity identified

On April 2, the Enforcement Division of the California Privacy Protection Agency issued its first Enforcement Advisory, titled “Applying Data Minimization to Consumer Requests.”  The Advisory highlights certain provisions of and regulations promulgated under the California Consumer Privacy Act (“CCPA”) that “reflect the concept of data minimization” and provides two examples that illustrate how

A new post on the Covington Inside Privacy blog discusses remarks by California Privacy Protection Agency (CPPA) Executive Director Ashkan Soltani at the International Association of Privacy Professionals’ global privacy conference last week.  The remarks covered the CPPA’s priorities for rulemaking and administrative enforcement of the California Consumer Privacy Act, including with respect to connected

This quarterly update highlights key legislative, regulatory, and litigation developments in the fourth quarter of 2023 and early January 2024 related to technology issues.  These included developments related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), data privacy, and cybersecurity.  As noted below, some of these developments provide companies with the opportunity for participation and comment.

I. Artificial Intelligence

Federal Executive Developments on AI

The Executive Branch and U.S. federal agencies had an active quarter, which included the White House’s October 2023 release of the Executive Order (“EO”) on Safe, Secure, and Trustworthy Artificial Intelligence.  The EO declares a host of new actions for federal agencies designed to set standards for AI safety and security; protect Americans’ privacy; advance equity and civil rights; protect vulnerable groups such as consumers, patients, and students; support workers; promote innovation and competition; advance American leadership abroad; and effectively regulate the use of AI in government.  The EO builds on the White House’s prior work surrounding the development of responsible AI.  Concerning privacy, the EO sets forth a number of requirements for the use of personal data for AI systems, including the prioritization of federal support for privacy-preserving techniques and strengthening privacy-preserving research and technologies (e.g., cryptographic tools).  Regarding equity and civil rights, the EO calls for clear guidance to landlords, Federal benefits programs, and Federal contractors to keep AI systems from being used to exacerbate discrimination.  The EO also sets out requirements for developers of AI systems, including requiring companies developing any foundation model “that poses a serious risk to national security, national economic security, or national public health and safety” to notify the federal government when training the model and provide results of all red-team safety tests to the government.

Federal Legislative Activity on AI

Congress continued to evaluate AI legislation and proposed a number of AI bills, though none of these bills is expected to progress in the immediate future.  For example, members of Congress continued to hold meetings on AI and introduced bills related to deepfakes, AI research, and transparency for foundation models.

  • Deepfakes and Inauthentic Content:  In October 2023, a group of bipartisan senators released a discussion draft of the NO FAKES Act, which would prohibit persons or companies from producing an unauthorized digital replica of an individual in a performance, or from hosting unauthorized digital replicas if the platform has knowledge that the replica was not authorized by the individual depicted. 
  • Research:  In November 2023, Senator Thune (R-SD), along with five bipartisan co-sponsors, introduced the Artificial Intelligence Research, Innovation, and Accountability Act (S. 3312), which would require covered internet platforms that operate generative AI systems to provide their users with clear and conspicuous notice that the covered internet platform uses generative AI. 
  • Transparency for Foundation Models:  In December 2023, Representative Beyer (D-VA-8) introduced the AI Foundation Model Transparency Act (H.R. 6881), which would direct the Federal Trade Commission (“FTC”) to establish transparency standards for foundation model deployers in consultation with other agencies.  The standards would require companies to provide consumers and the FTC with information on a model’s training data and mechanisms, as well as information regarding whether user data is collected during inference.
  • Bipartisan Senate Forums:  Senator Schumer’s (D-NY) AI Insight Forums, which are a part of his SAFE Innovation Framework, continued to take place this quarter.  As part of these forums, bipartisan groups of senators met multiple times to learn more about key issues in AI policy, including privacy and liability, long-term risks of AI, and national security.

Continue Reading U.S. Tech Legislative, Regulatory & Litigation Update – Fourth Quarter 2023

The Federal Trade Commission’s (“FTC”) Office of Technology announced that it will hold a half-day virtual “FTC Tech Summit” on January 25, 2024 to address key developments in the field of artificial intelligence (“AI”).

The FTC’s event website notes that the Summit will “bring together a diverse set of perspectives across academia, industry, civil society

On October 10, 2023, California Governor Gavin Newsom signed S.B. 362, the Delete Act (the “Act”), into law.  The new law represents a substantive overhaul of California’s existing data broker statute, which requires data brokers to register with the California Attorney General annually.  The passage of the Act follows a renewed interest in data

On July 18, 2023, Federal Communications Commission (FCC) Chairwoman Jessica Rosenworcel announced that she has circulated a proposal to the FCC’s commissioners to create “a voluntary cybersecurity labeling program that would provide consumers with clear information about the security of their Internet-enabled devices.”

According to the text of her announcement (the proposal itself is not

On June 20, 2023, the Federal Communications Commission (“FCC”) released a Notice of Proposed Rulemaking (“NPRM”) to require cable operators and direct broadcast satellite (“DBS”) providers to display an “all-in” price for their video programming services in their billing and marketing materials.  The White House issued a press release that same day expressing its support