This quarterly update highlights key legislative, regulatory, and litigation developments in the second quarter of 2024 related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), and data privacy and cybersecurity. 

I. Artificial Intelligence

Federal Legislative Developments

  • Impact Assessments: The American Privacy Rights Act of 2024 (H.R. 8818, hereinafter “APRA”) was formally introduced in the House by Representative Cathy McMorris Rodgers (R-WA) on June 25, 2024.  Notably, while previous drafts of the APRA, including the May 21 revised draft, would have required algorithmic impact assessments, the introduced version omits the “Civil Rights and Algorithms” section that contained these requirements.
  • Disclosures: In April, Representative Adam Schiff (D-CA) introduced the Generative AI Copyright Disclosure Act of 2024 (H.R. 7913).  The Act would require persons that create a training dataset that is used to build a generative AI system to provide notice to the Register of Copyrights containing a “sufficiently detailed summary” of any copyrighted works used in the training dataset and the URL for such training dataset, if the dataset is publicly available.  The Act would require the Register to issue regulations to implement the notice requirements and to maintain a publicly available online database that contains each notice filed.
  • Public Awareness and Toolkits: Certain legislative proposals focused on increasing public awareness of AI and its benefits and risks.  For example, Senator Todd Young (R-IN) introduced the Artificial Intelligence Public Awareness and Education Campaign Act (S. 4596), which would require the Secretary of Commerce, in coordination with other agencies, to carry out a public awareness campaign that provides information regarding the benefits and risks of AI in the daily lives of individuals.  Senator Edward Markey (D-MA) introduced the Social Media and AI Resiliency Toolkits in Schools Act (S. 4614), which would require the Department of Education and the Department of Health and Human Services to develop toolkits to inform students, educators, parents, and others on how AI and social media may impact student mental health.
  • Senate AI Working Group Releases AI Roadmap: On May 15, the Bipartisan Senate AI Working Group published a roadmap for AI policy in the United States (the “AI Roadmap”).  The AI Roadmap encourages committees to conduct further research on specific issues relating to AI, such as “AI and the Workforce” and “High Impact Uses for AI.”  It states that existing laws (concerning, e.g., consumer protection, civil rights) “need to consistently and effectively apply to AI systems and their developers, deployers, and users” and raises concerns about AI “black boxes.”  The AI Roadmap also addresses the need for best practices and the importance of having a human in the loop for certain high impact automated tasks.

Federal Regulatory Developments

  • Federal Communications Commission (“FCC”): FCC Chairwoman Jessica Rosenworcel asked the Commission to approve a Notice of Proposed Rulemaking (“NPRM”) seeking comment on a proposal to require a disclosure when political ads on radio and television contain AI-generated content.  According to the FCC’s press release, the proposal would require an on-air disclosure when a political ad—whether from a candidate or an issue advertiser—contains AI-generated content.  The requirements would apply only to those entities currently subject to the FCC’s political advertising rules, meaning it would not encompass online political advertisements.  Shortly after Chairwoman Rosenworcel’s statement, Commissioner Brendan Carr issued a statement indicating that there is disagreement within the Commission concerning the appropriateness of FCC intervention on this topic.
  • Department of Homeland Security (“DHS”): DHS announced the establishment of the AI Safety and Security Board (the “Board”), which will advise the DHS Secretary, the critical infrastructure community, private sector stakeholders, and the broader public on the safe and secure development and deployment of AI technology in our nation’s critical infrastructure. In addition, DHS Secretary Alejandro N. Mayorkas and Chief AI Officer Eric Hysen announced the first ten members of the AI Corps, DHS’s effort to recruit 50 AI technology experts to play pivotal roles in responsibly leveraging AI across strategic mission areas.
  • The White House: The White House issued a press release detailing the steps that federal agencies have taken in line with the mandates established by the 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI (the “AI Executive Order”).  In sum, federal agencies reported that they had timely completed all of the 180-day actions mandated by the AI Executive Order.  The White House also announced new principles to protect workers from dangers posed by AI, including ethically developing AI, establishing AI governance with human oversight, and ensuring responsible use of worker data.
  • President’s Council of Advisors on Science and Technology (“PCAST”): PCAST released a report that recommends new actions that will help the United States harness the power of AI to accelerate scientific discovery.  The report provides examples of research areas in which AI is already impactful and discusses practices needed to ensure effective and responsible use of AI technologies.  Specific recommendations include expanding existing efforts, such as the National Artificial Intelligence Research Resource pilot, to broadly and equitably share basic AI resources, and expanding secure and responsible access of anonymized federal data sets for critical research needs.
  • U.S. Patent and Trademark Office (“USPTO”): The USPTO published guidance on the use of AI-based tools in practice before the USPTO.  The guidance informs practitioners and the public of the issues that patent and trademark professionals, innovators, and entrepreneurs must navigate while using AI in matters before the USPTO.  The guidance also highlights that the USPTO remains committed to not only maximizing the benefits of AI and seeing them distributed broadly across society, but also using technical mitigations and human governance to cabin risks arising from AI use in practice before the USPTO.
  • National Security Agency (“NSA”): The NSA released a Cybersecurity Information Sheet (“CSI”) titled “Deploying AI Systems Securely: Best Practices for Deploying Secure and Resilient AI Systems.”  As the first CSI led by the Artificial Intelligence Security Center, the CSI is intended to support National Security System owners and Defense Industrial Base companies that will be deploying and operating AI systems designed and developed by an external entity.

State Legislative Developments

  • Algorithmic Discrimination & Consumer Protection: The Colorado AI Act (SB 205) was signed into law on May 17, making Colorado the first state to enact AI legislation addressing risks of algorithmic discrimination in the development and deployment of AI.  The Act, which takes effect February 1, 2026, primarily regulates the use of “high risk AI,” or AI systems that make, or are a substantial factor in making, consequential decisions on behalf of consumers.  Key requirements include: a duty of care for AI developers and deployers to prevent algorithmic discrimination; developer disclosures of information about training data, performance, and discrimination safeguards; reporting to the state Attorney General of risks or instances of algorithmic discrimination; deployer “risk management policies and programs” for mitigating algorithmic discrimination risks; deployer algorithmic discrimination impact assessments; notices to consumers affected by AI consequential decisions; and opportunities for consumers to correct personal data and appeal adverse decisions.  On June 13, Colorado Governor Jared Polis, Colorado Attorney General Phil Weiser, and Colorado Senate Majority Leader Robert Rodriguez issued a public letter announcing a “process to revise” the Act to “minimize unintended consequences associated with its implementation” and consider “delays in the implementation of this law to ensure . . . harmonization” with other state and federal frameworks.
  • Election-Related Synthetic Content Laws: Alabama (HB 172), Arizona (SB 1359), Colorado (HB 1147), Florida (HB 919), Hawaii (SB 2687), Mississippi (SB 2577), and New York (A 8808) enacted laws regulating the creation or dissemination of AI-generated election content or political advertisements, joining Idaho, Indiana, Michigan, New Mexico, Oregon, Utah, Washington, Wisconsin, and other states that enacted similar laws in late 2023 and early 2024.  New Hampshire (HB 1596) passed a similar law that is awaiting the Governor’s signature.  These laws generally prohibit, within 90 days of an election, the knowing creation or distribution of deceptive content created or modified by AI if such content depicts candidates, election officials, or parties, or is intended to influence voting behavior or injure a candidate.  Some of these laws permit the distribution of otherwise prohibited content if it contains an audio or visual disclaimer that the content is AI-generated.  Other laws, like Arizona SB 1359, impose independent requirements that deepfakes of candidates or political parties contain AI disclaimers within 90 days of an election.
  • AI-Generated CSAM & Intimate Imagery Laws: Alabama (HB 168), Arizona (HB 2394), Florida (SB 1680), Louisiana (SB 6), New York (A 8808), North Carolina (HB 591), and Tennessee (HB 2163) enacted laws regulating the creation or dissemination of AI-generated CSAM or intimate imagery, joining Idaho, Indiana, South Dakota, and Washington.  These laws generally impose criminal liability for the knowing creation, distribution, solicitation, or possession of AI- or computer-generated CSAM, or the dissemination of AI-generated intimate imagery with intent to coerce, harass, or intimidate.
  • Laws Regulating AI-Generated Impersonations & Digital Replicas: Arizona (HB 2394) enacted  a law prohibiting the publication or distribution of digital replicas and digital impersonations without the consent of the person depicted.  Illinois (HB 4875) passed a similar bill that is awaiting the Governor’s signature.  Illinois (HB 4762) also passed a bill regulating services contracts that allow for the creation or use of digital replicas in place of work that the individual would otherwise have performed, rendering such provisions unenforceable if they do not contain a reasonably specific description of the intended uses of the digital replica and if the individual was not properly represented when negotiating the services contract.  This bill also awaits the Governor’s signature.
  • California AI Bills Regulating Frontier Models, Training Data, Content Labeling, and Social Media Platforms: On May 20, the California Assembly passed AB 2013, which would require AI developers to issue public statements summarizing datasets used to develop their AI systems, and AB 2877, which would require AI developers to receive affirmative authorization before using personal information from persons under sixteen years of age to train AI.  On May 21, the California Assembly passed AB 1791, which would require social media platforms to redact personal provenance data and add content labels and “system provenance data” for user-uploaded content, and AB 2930, a comprehensive bill that would regulate the use of “automated decision tools” and, like Colorado SB 205, would impose impact assessment, notice, and disclosure requirements on developers and deployers of automated decision-making systems used to make consequential decisions, with the goal of mitigating algorithmic discrimination risks.  On the same day, the California Senate passed the Safe & Secure Innovation for Frontier AI Models Act (SB 1047), which would impose sweeping regulations on developers of the most powerful AI models, and the California AI Transparency Act (SB 942), which would require generative AI providers to create “AI detection tools” and add disclosures to AI content.  On May 22, the California Assembly passed the Provenance, Authenticity, and Watermarking Standards Act (AB 3211), which would require generative AI providers to ensure that outputs are labeled with watermarks and require large online platforms to add “provenance disclosures” to content on their platforms.

AI Litigation Developments

  • New Copyright Complaints
    • On June 27, the Center for Investigative Reporting, a nonprofit media organization, filed a complaint against OpenAI and Microsoft alleging copyright infringement from use of the plaintiff’s copyrighted works to train ChatGPT.  Center for Investigative Reporting, Inc. v. OpenAI et al., 1:24-cv-4872 (S.D.N.Y.).
    • On June 24, Universal Music Group, Sony Music Entertainment, Warner Records, and other record labels filed complaints against Suno and Udio, companies that allegedly used copyrighted sound recordings to train generative AI models that “generate digital music files that sound like genuine human sound recordings in response to basic inputs.”  UMG Recordings, Inc. et al. v. Suno, Inc. et al., 1:24-cv-11611 (D. Mass.), and UMG Recordings, Inc. et al. v. Uncharted Labs, Inc. et al., 1:24-cv-04777 (S.D.N.Y.).
    • On May 16, several voice actors filed a complaint against Lovo, Inc., a company that allegedly uses AI-driven software to create and edit voice-over narration, claiming that Lovo used their voices without authorization.  Lehrman et al. v. Lovo, Inc., 1:24-cv-3770 (S.D.N.Y.). 
    • On May 2, authors filed a putative class action lawsuit against Databricks, Inc. and Mosaic ML, and another against Nvidia, alleging that the companies used copyrighted books to train their models.  Makkai et al. v. Databricks, Inc. et al., 4:24-cv-02653 (N.D. Cal.), and Dubus et al. v. Nvidia, 4:24-cv-02655 (N.D. Cal.).
    • On April 26, a group of photographers and cartoonists sued Google, alleging that Google used their copyrighted images to train its AI image generator, Imagen.  Zhang et al. v. Google LLC et al., 5:24-cv-02531 (N.D. Cal.).
    • On April 30, newspaper publishers who publish electronic copies of older print editions of their respective newspapers filed a complaint in Daily News et al. v. Microsoft et al., 1:24-cv-03285 (S.D.N.Y.), alleging, among other things, that the defendants copied those publications to train GPT models.
  • Copyright and Digital Millennium Copyright Act (“DMCA”) Case Developments:
    • On June 24, the court in Doe v. GitHub, Inc. et al., 4:22-cv-6823 (N.D. Cal.), partially granted GitHub’s motion to dismiss.  Among other things, the court granted the motion with prejudice as to plaintiffs’ claim under the DMCA for removal of copyright management information, again finding that plaintiffs had failed to satisfy the “identicality” requirement.  The court declined to dismiss claims for breach of open-source license terms.
    • On May 7, the court in Andersen v. Stability AI, 3:23-cv-00201 (N.D. Cal.), issued a tentative ruling on the defendants’ motions to dismiss the first amended complaint.  Among other things, the court was inclined to deny all the motions as to direct and “induced” copyright infringement and DMCA claims, to rule that there were sufficient allegations to support a “compressed copies” theory (i.e., that the plaintiffs’ works are contained in the AI models at issue such that when the AI models are copied, so are the works used to train the model), to allow the false endorsement and trademark claims to proceed, and to give the plaintiffs a chance to plead an unjust enrichment theory not preempted by the Copyright Act.  The court has yet to issue a final ruling.
  • Class Action Dismissals: On May 24, the court in A.T. et al. v. OpenAI LP et al., 3:23-cv-04557 (N.D. Cal.), granted the defendants’ motion to dismiss with leave to amend, holding that the plaintiffs had violated Federal Rule of Civil Procedure 8’s “short and plain statement” requirement.  The court described the plaintiffs’ 200-page complaint, which alleged ten privacy-related statutory and common law violations, as full of “unnecessary and distracting allegations” and “rhetoric and policy grievances,” cautioning the plaintiffs that if the amended complaint continued “to focus on general policy concerns and irrelevant information,” dismissal would be with prejudice.  On June 14, the plaintiffs notified the court that they did not intend to file an amended complaint.  The plaintiff in A.S. v. OpenAI LP et al., 3:24-cv-01190 (N.D. Cal.), a case with similar claims to A.T., voluntarily dismissed the case after the decision in A.T.
  • Consent Judgment in Right of Publicity Case: On June 18, a consent judgment was entered in the suit brought by the estate of George Carlin against a podcast company over its allegedly AI-generated “George Carlin Special.”  Main Sequence, Ltd. et al. v. Dudesy, LLC et al., 24-cv-00711 (C.D. Cal.).

II. Connected & Automated Vehicles

  • Continued Focus on Connectivity and Domestic Violence: Following letters sent to automotive manufacturers and a press release issued earlier this year, on April 23, 2024, the FCC issued an NPRM seeking comment on the types of connected car services in the marketplace today, whether changes to the FCC’s rules implementing the Safe Connections Act are needed to address the impact of connected car services on domestic violence survivors, and what steps connected car service providers can proactively take to protect survivors from being stalked or harassed through the misuse of connected car services.  On April 25, Rep. Debbie Dingell (D-MI) wrote a letter to the Chairwoman of the FCC noting that she would like to “work with the FCC, [her] colleagues in Congress, and stakeholders to develop a comprehensive understanding of and solutions to the misuse of connected vehicle technologies” in relation to domestic abuse and “implement effective legislative and regulatory frameworks that safeguard survivors’ rights and well-being.”
  • Updated National Public Transportation Safety Plan: On April 9, 2024, the Federal Transit Administration (“FTA”) published an updated version of the National Public Transportation Safety Plan.  The FTA noted that the National Safety Plan “does not create new mandatory standards but rather identifies existing voluntary minimum safety standards and recommended practices,” but that FTA will “consider[] mandatory requirements or standards where necessary and supported by data” and “establish any mandatory standards through separate regulatory processes.”
  • Investigations into Data Retention Practices: On April 30, Sen. Ron Wyden (D-OR) and Sen. Edward Markey (D-MA) sent a letter to the Federal Trade Commission (“FTC”) asking the FTC to investigate several automakers for “deceiving their customers by falsely claiming to require a warrant or court order before turning over customer location data to government agencies” and urging the FTC to “investigate these auto manufacturers’ deceptive claims as well as their harmful data retention practices” and “consider holding these companies’ senior executives accountable for their actions.”  This letter follows similar letters Sen. Markey sent to automakers and the FTC in December 2023 and February 2024, respectively.  Following this activity, on May 14, the FTC published a blog post on the collection and use of consumer data in vehicles, warning that “[c]ar manufacturers–and all businesses–should take note that the FTC will take action to protect consumers against the illegal collection, use, and disclosure of their personal data,” including geolocation data.
  • AI Roadmap – CAV Highlights: The AI Roadmap, discussed above, encourages committees to: (1) “develop emergency appropriations language to fill the gap between current spending levels and the [spending level proposed by the National Security Commission on Artificial Intelligence (“NSCAI”)],” including “[s]upporting R&D and interagency coordination around the intersection of AI and critical infrastructure, including for smart cities and intelligent transportation system technologies”; and (2) “[c]ontinue their work on developing a federal framework for testing and deployment of autonomous vehicles across all modes of transportation to remain at the forefront of this critical space.”
  • Senate Hearing on Roadway Safety: On May 21, the Subcommittee on Surface Transportation, Maritime, Freight & Ports within the U.S. Senate Committee on Commerce, Science & Transportation convened a hearing entitled “Examining the Roadway Safety Crisis and Highlighting Community Solutions.”  Sen. Gary Peters (D-MI), Chair of the Subcommittee, stated in his opening statement that “digital infrastructure that improves crash response to predictive road maintenance and active traffic management” are “essential to achieving safe system goals” and that “safe and accountable development, testing, and deployment of autonomous vehicles” can “help us reduce serious injuries and death on our roadways.”
  • Connected Vehicle National Security Review Act: On May 29, Rep. Elissa Slotkin (D-MI) announced proposed legislation entitled the Connected Vehicle National Security Review Act, which would establish a formal national security review for connected vehicles built by companies from China or certain other countries.  The legislation would allow the Department of Commerce to limit or ban the introduction of these vehicles from U.S. markets if they pose a threat to national security.
  • Updates to Federal Motor Vehicle Safety Standards: On May 9, the National Highway Traffic Safety Administration (“NHTSA”) within the Department of Transportation (“DOT”) issued a Final Rule that adopts a new Federal Motor Vehicle Safety Standard requiring automatic emergency braking (“AEB”) systems, including pedestrian AEB, and forward collision warning systems on light vehicles weighing under 10,000 pounds manufactured on or after September 1, 2029 (September 1, 2030 for small-volume manufacturers, final-stage manufacturers, and alterers).  The AEB system must “detect and react to an imminent crash with both a lead vehicle or a pedestrian.”

III. Data Privacy & Cybersecurity

Privacy Developments

  • Proposed Comprehensive Federal Privacy Law: As noted above, in June, lawmakers formally introduced the APRA, which, if passed, would create a comprehensive federal privacy regime.  The APRA would apply to “Covered Entities,” defined as “any entity that determines the purposes and means of collecting, processing, retaining, or transferring covered data” and that is subject to the FTC Act, is a common carrier, or is a nonprofit.  The definition excludes government entities and their service providers, specified small businesses, and certain nonprofits.
  • National Security & Privacy: In April, the President signed the Protecting Americans’ Data from Foreign Adversaries Act of 2024 (“PADFAA”) into law.  Under the law, data brokers are prohibited from selling, transferring, or providing access to Americans’ “sensitive data” (including identifiers such as Social Security numbers, geolocation data, data about minors, biometric information, private communications, and information identifying an individual’s online activities over time and across websites or online services) to certain foreign adversaries or entities controlled by foreign adversaries.  Separately, the President signed legislation reauthorizing Section 702 of the Foreign Intelligence Surveillance Act, which permits the U.S. government to collect, without a warrant, the communications of non-Americans located outside the country to gather foreign intelligence.
  • Health Data & Privacy: In April, the U.S. Department of Health and Human Services (“HHS”) published a final rule that modifies the Standards for Privacy of Individually Identifiable Health Information under the Health Insurance Portability and Accountability Act (“HIPAA”) regarding protected health information concerning reproductive health.  Relatedly, the FTC voted 3-2 to issue a final rule that expands the scope of the Health Breach Notification Rule (“HBNR”) to apply to health apps and similar technologies and broadens what constitutes a breach of security, among other updates.
  • New State Privacy Laws: Maryland, Minnesota, Nebraska, and Rhode Island became the latest states to enact comprehensive privacy legislation, joining California, Virginia, Colorado, Connecticut, Utah, Iowa, Indiana, Tennessee, Montana, Oregon, Texas, Florida, Delaware, New Jersey, New Hampshire, and Kentucky.  In addition, Alabama enacted a new genetic privacy law, and Colorado and Illinois amended existing privacy laws.

Cybersecurity Developments

  • CIRCIA: On July 3, the U.S. Cybersecurity and Infrastructure Security Agency closed the public comment period for the NPRM related to the Cyber Incident Reporting for Critical Infrastructure Act of 2022 (“CIRCIA”).  The final rule, expected in September 2025, will significantly alter the landscape for federal cyber incident reporting notifications, consistent with the Administration’s whole-of-government effort to bolster the nation’s cybersecurity.  
  • National Cybersecurity Strategy Implementation Plan: In May, the Administration added 65 new initiatives to the National Cybersecurity Strategy Implementation Plan.

We will continue to update you on meaningful developments in these quarterly updates and across our blogs.

Jennifer Johnson

Jennifer Johnson is a partner specializing in communications, media and technology matters who serves as Co-Chair of Covington’s Technology Industry Group and its global and multi-disciplinary Artificial Intelligence (AI) and Internet of Things (IoT) Groups. She represents and advises technology companies, content distributors, television companies, trade associations, and other entities on a wide range of media and technology matters. Jennifer has almost three decades of experience advising clients in the communications, media and technology sectors, and has held leadership roles in these practices for almost twenty years. On technology issues, she collaborates with Covington’s global, multi-disciplinary team to assist companies navigating the complex statutory and regulatory constructs surrounding this evolving area, including product counseling and technology transactions related to connected and autonomous vehicles, internet connected devices, artificial intelligence, smart ecosystems, and other IoT products and services. Jennifer serves on the Board of Editors of The Journal of Robotics, Artificial Intelligence & Law.

Jennifer assists clients in developing and pursuing strategic business and policy objectives before the Federal Communications Commission (FCC) and Congress and through transactions and other business arrangements. She regularly advises clients on FCC regulatory matters and advocates frequently before the FCC. Jennifer has extensive experience negotiating content acquisition and distribution agreements for media and technology companies, including program distribution agreements, network affiliation and other program rights agreements, and agreements providing for the aggregation and distribution of content on over-the-top app-based platforms. She also assists investment clients in structuring, evaluating, and pursuing potential investments in media and technology companies.

Nicholas Xenakis

Nick Xenakis draws on his Capitol Hill experience to provide regulatory and legislative advice to clients in a range of industries, including technology. He has particular expertise in matters involving the Judiciary Committees, such as intellectual property, antitrust, national security, immigration, and criminal justice.

Nick joined the firm’s Public Policy practice after serving most recently as Chief Counsel for Senator Dianne Feinstein (D-CA) and Staff Director of the Senate Judiciary Committee’s Human Rights and the Law Subcommittee, where he was responsible for managing the subcommittee and Senator Feinstein’s Judiciary staff. He also advised the Senator on all nominations, legislation, and oversight matters before the committee.

Previously, Nick was the General Counsel for the Senate Judiciary Committee, where he managed committee staff and directed legislative and policy efforts on all issues in the Committee’s jurisdiction. He also participated in key judicial and Cabinet confirmations, including of an Attorney General and two Supreme Court Justices. Nick was also responsible for managing a broad range of committee equities in larger legislation, including appropriations, COVID-relief packages, and the National Defense Authorization Act.

Before his time on Capitol Hill, Nick served as an attorney with the Federal Public Defender’s Office for the Eastern District of Virginia. There he represented indigent clients charged with misdemeanor, felony, and capital offenses in federal court throughout all stages of litigation, including trial and appeal. He also coordinated district-wide habeas litigation following the Supreme Court’s decision in Johnson v. United States (invalidating the residual clause of the Armed Career Criminal Act).

Phillip Hill

Phillip Hill focuses on complex copyright matters with an emphasis on music, film/TV, video games, sports, theatre, and technology.

Phillip’s global practice includes all aspects of copyright and the DMCA, as well as trademark and right of publicity law, and encompasses the full spectrum of litigation, transactions, counseling, legislation, and regulation. He regularly represents clients in federal and state court, as well as before the U.S. Copyright Royalty Board, Copyright Office, Patent & Trademark Office, and Trademark Trial & Appeal Board.

Through his work at the firm and prior industry and in-house experience, Phillip has developed a deep understanding of his clients’ industries and regularly advises on cutting-edge topics like generative artificial intelligence, the metaverse, and NFTs. Phillip has been recognized by Billboard as a Top Music Lawyer.

In addition to his full-time legal practice, Phillip serves as Chair of the ABA Music and Performing Arts Committee, frequently speaks on emerging trends, is active in educational efforts, and publishes regularly.

Olivia Dworkin

Olivia Dworkin minimizes regulatory and litigation risks for clients in the medical device, pharmaceutical, biotechnology, eCommerce, and digital health industries through strategic advice on complex FDA issues, helping to bring innovative products to market while ensuring regulatory compliance.

With a focus on cutting-edge medical technologies and digital health products and services, Olivia regularly helps new and established companies navigate a variety of state and federal regulatory, legislative, and compliance matters throughout the total product lifecycle. She has experience counseling clients on the development, FDA regulatory classification, and commercialization of digital health tools, including clinical decision support software, mobile medical applications, general wellness products, medical device data systems, administrative support software, and products that incorporate artificial intelligence, machine learning, and other emerging technologies.

Olivia also assists clients in advocating for legislative and regulatory policies that will support innovation and the safe deployment of digital health tools, including by drafting comments on proposed legislation, frameworks, whitepapers, and guidance documents. Olivia keeps close to the evolving regulatory landscape and is a frequent contributor to Covington’s Digital Health blog. Her work also has been featured in the Journal of Robotics, Artificial Intelligence & Law, Law360, and the Michigan Journal of Law and Mobility.

Prior to joining Covington, Olivia was a fellow at the University of Michigan Veterans Legal Clinic, where she gained valuable experience as the lead attorney successfully representing clients at case evaluations, mediations, and motion hearings. At Michigan Law, Olivia served as Online Editor of the Michigan Journal of Gender and Law, president of the Trial Advocacy Society, and president of the Michigan Law Mock Trial Team. She excelled in national mock trial competitions, earning two Medals for Excellence in Advocacy from the American College of Trial Lawyers and being selected as one of the top sixteen advocates in the country for an elite, invitation-only mock trial tournament.

Shayan Karbassi

Shayan Karbassi is an associate in the firm’s Washington, DC office. He represents and advises clients on a range of cybersecurity and national security issues. As a part of his cybersecurity practice, Shayan assists clients with cyber and data security incident response and preparedness, government and internal investigations, and regulatory compliance. He also regularly advises clients with respect to risks stemming from U.S. criminal and civil anti-terrorism laws and other national security issues, including investigating allegations of terrorism-financing and litigating Anti-Terrorism Act claims.

Shayan maintains an active pro bono litigation practice with a focus on human rights, freedom of information, and free media issues.

Prior to joining the firm, Shayan worked in the U.S. national security community.

Jorge Ortiz

Jorge Ortiz is an associate in the firm’s Washington, DC office and a member of the Data Privacy and Cybersecurity and the Technology and Communications Regulation Practice Groups.

Jorge advises clients on a broad range of privacy and cybersecurity issues, including topics related to privacy policies and compliance obligations under U.S. state privacy regulations like the California Consumer Privacy Act.

Jemie Fofanah

Jemie Fofanah is an associate in the firm’s Washington, DC office. She is a member of the Privacy and Cybersecurity Practice Group and the Technology and Communication Regulatory Practice Group. She also maintains an active pro bono practice with a focus on criminal defense and family law.

August Gweon

August Gweon counsels national and multinational companies on data privacy, cybersecurity, antitrust, and technology policy issues, including issues related to artificial intelligence and other emerging technologies. August leverages his experiences in AI and technology policy to help clients understand complex technology developments, risks, and policy trends.

August regularly provides advice to clients on privacy and competition frameworks and AI regulations, with an increasing focus on U.S. state AI legislative developments and trends related to synthetic content, automated decision-making, and generative AI. He also assists clients in assessing federal and state privacy regulations like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in public policy discussions and rulemaking processes.

Andrew Longhi

Andrew Longhi advises national and multinational companies across industries on a wide range of regulatory, compliance, and enforcement matters involving data privacy, telecommunications, and emerging technologies.

Andrew’s practice focuses on advising clients on how to navigate the rapidly evolving legal landscape of state, federal, and international data protection laws. He proactively counsels clients on the substantive requirements introduced by new laws and shifting enforcement priorities. In particular, Andrew routinely supports clients in their efforts to launch new products and services that implicate the laws governing the use of data, connected devices, biometrics, and telephone and email marketing.

Andrew assesses privacy and cybersecurity risk as a part of diligence in complex corporate transactions where personal data is a key asset or data processing issues are otherwise material. He also provides guidance on generative AI issues, including privacy, Section 230, age-gating, product liability, and litigation risk, and has drafted standards and guidelines for large language models to follow. Andrew focuses on providing risk-based guidance that can keep pace with evolving legal frameworks.

Lauren Gerber

Lauren Gerber is an experienced litigator focused on product liability and mass tort defense and complex civil litigation across technology and pharmaceutical industries.

Lauren has represented clients at all stages of litigation, including fact and expert discovery, dispositive motions, and pre-trial Daubert motions and motions in limine. She also has experience representing clients preparing for trial in patent, insurance recovery, and employment discrimination cases in federal and state court.

Lauren has tried multiple cases to verdict, including the pro bono representation of a defendant charged with first degree murder. Lauren has also represented dozens of children and caregivers in D.C. Superior Court at trial and in evidentiary hearings during a six-month full-time rotation at the Children’s Law Center, DC’s largest non-profit legal services provider.

Vanessa Lauber

Vanessa Lauber is an associate in the firm’s New York office and a member of the Data Privacy and Cybersecurity Practice Group, counseling clients on data privacy and emerging technologies, including artificial intelligence.

Vanessa’s practice includes partnering with clients on compliance with federal and state privacy laws and FTC and consumer protection laws and guidance. Additionally, Vanessa routinely counsels clients on drafting and developing privacy notices and policies. Vanessa also advises clients on trends in artificial intelligence regulations and helps design governance programs for the development and deployment of artificial intelligence technologies across a number of industries.

Zoe Kaiser

Zoe Kaiser is an associate in the firm’s San Francisco office, where she is a member of the Litigation and Investigations, Copyright and Trademark Litigation, and Class Actions Practice Groups. She advises on cutting-edge topics such as generative artificial intelligence.

Zoe maintains an active pro bono practice, focusing on media freedom.

Madeleine Dolan

Madeleine (Maddie) Dolan is a litigation associate in the Washington, DC office. She is a member of the Product Liability and Mass Torts litigation group and has an active pro bono criminal defense practice.

Maddie is a stand-up litigator, having first-chaired at a pro bono trial as a member of a team that secured their client’s acquittal on first-degree murder charges; she has also taken a deposition in a commercial litigation matter. She also has extensive experience drafting dispositive motions, leading document reviews, developing expert reports, and preparing for depositions and trial.

Prior to joining Covington, Maddie served as a law clerk to U.S. District Judge Mark R. Hornak of the Western District of Pennsylvania in Pittsburgh, PA. She also previously worked as a consultant and strategic communications director, managing marketing campaigns for federal government agencies.