On July 9, 2024, the FTC and California Attorney General settled a case against NGL Labs (“NGL”) and two of its co-founders. NGL Labs’ app, “NGL: ask me anything,” allows users to receive anonymous messages from their friends and social media followers. The complaint alleged violations of the FTC Act, the Restore Online Shoppers’ Confidence Act (ROSCA), the Children’s Online Privacy Protection Act (COPPA), and California laws prohibiting deceptive advertising and prohibiting unfair and deceptive business practices.

FTC Act & California Law Violations. The complaint alleged that once a user posted a prompt inviting anonymous messages, they would receive automated messages that appeared to come from their contacts, but in reality, came from NGL itself. The complaint also alleged that the company encouraged users to purchase NGL Pro (a paid version of the app) to learn the identity of the people who sent them anonymous messages, but that once consumers upgraded to NGL Pro, they did not actually learn the senders’ identities. Instead, the complaint alleged that they were sent vague, and sometimes false, hints about the senders’ identities, such as the city the sender may live in or the time the message was sent.

The complaint also alleged that NGL claimed to use “world class AI content moderation” to filter out harmful language and bullying, including by recognizing inappropriate use of emojis. In reality, however, the company was purportedly aware that harmful language and bullying were commonplace on the app and its technology did not filter out such behavior.

COPPA Violations. The FTC alleged that NGL obtained actual knowledge of the age of users under 13 through complaints from parents of children who use the app. NGL allegedly retained the information of users under 13 after such complaints without providing required notices or obtaining consent. Notably, the complaint also alleged that the company was aware that numerous children used the app; constructive knowledge alone, however, is insufficient to establish liability under COPPA.

ROSCA Violations. NGL allegedly (1) failed to clearly disclose that the NGL Pro subscription would charge users on a recurring basis, disclosing this fact only in small, off-white text stating that “pro renews for $9.99/week”; and (2) misrepresented what customers would receive by subscribing (i.e., by misleading consumers into thinking they could learn who sent them messages by subscribing). NGL also allegedly failed to obtain users’ express informed consent before charging their financial accounts.

Ordered Relief. The order prohibits the company from misrepresenting the capabilities of its AI technology, including its ability to filter out cyberbullying. The company also may not misrepresent that any message received through its app was sent by a live person or that a user will be able to see the identity of those who send them messages.

Even though COPPA’s requirements apply only to users under the age of 13 and impose no age-gating obligation, the order requires, as fencing-in relief, that NGL implement a neutral age gate that will prevent those under 18 from accessing the app. For existing users, NGL must delete personal information unless the user indicates they are over 13 or NGL obtains parental consent to retain the data.

Additionally, the order incorporates a variety of requirements that the FTC has included in its proposed revised Negative Option Rule. For example, the company is prohibited from misrepresenting any material fact related to the transaction, including any fact related to the underlying good or service. As in the proposed Negative Option Rule, the order also requires that NGL disclose all material facts related to the Negative Option Feature immediately adjacent to the means of recording the customer’s consent, with the exception of cancellation instructions, which need not be immediately adjacent. NGL also must obtain express informed consent to the Negative Option Feature separately from any other portion of the transaction, through a check box or substantially similar method. The order also incorporates cancellation requirements, such as that cancellation must be available in the same medium as sign-up, even though the complaint does not allege a violation of ROSCA’s cancellation provision. Furthermore, the company must provide users a confirmation email of the purchase, as well as reminders of the frequency and amount of the charge and how to cancel.

Finally, the company must pay $4.5 million to the FTC and $500,000 in civil penalties to the state of California.

By Madelaine Harrington & Marty Hansen on July 17, 2024

On 12 July 2024, EU lawmakers published the EU Artificial Intelligence Act (“AI Act”), a first-of-its-kind regulation aiming to harmonise rules on AI models and systems across the EU. The AI Act prohibits certain AI practices, and sets out regulations on “high-risk” AI systems, certain AI systems that pose transparency risks, and general-purpose AI (“GPAI”) models.

The AI Act’s regulations will take effect in different stages.  Rules regarding prohibited practices will apply as of 2 February 2025; obligations on GPAI models will apply as of 2 August 2025; and both transparency obligations and obligations on high-risk AI systems will apply as of 2 August 2026.  That said, there are exceptions for high-risk AI systems and GPAI models already placed on the market:  

  • for most high-risk AI systems that have been placed on the market or put into service in the EU before 2 August 2026, the AI Act will apply only if those systems are subject to “significant changes” in their designs after that date; and
  • for GPAI models that have been placed on the market in the EU before 2 August 2025, the AI Act’s rules will not apply to them until 2 August 2027.

The AI Act requires the Commission, among other things, to develop guidelines or adopt secondary legislation on: practical implementations of the Act, including on the definition of an AI system; prohibited practices; transparency obligations; obligations applicable to high-risk AI systems and responsibilities along the AI value chain; and the relationship of the AI Act with the Union harmonisation legislation set out in Annex I of the AI Act.  The Commission has indicated that both the guidelines on the practical implementation of the AI system definition and those on prohibited practices will be published in the next six months. 

The Covington team has monitored the AI Act since its inception and has published numerous blog posts on the substance of the AI Act and its progress through the legislative process.

If you have questions about the AI Act, or other tech regulatory matters, we are happy to assist with any queries.

On July 9, 2024, the Federal Trade Commission (“FTC”) voted 4-1 (with Commissioner Melissa Holyoak dissenting) to release an Interim Staff Report (the “Interim Report”) entitled: Pharmacy Benefit Managers: The Powerful Middlemen Inflating Drug Costs and Squeezing Main Street Pharmacies. The Interim Report describes what FTC staff has uncovered to date during a two-year investigation of the country’s six largest pharmacy benefit managers (“PBMs”). The agency claims that vertical integration and market consolidation have allowed a few PBMs to exert power over drug access and consumer prices, as well as over unaffiliated pharmacies. The Interim Report also attempts to explain some of the complexities in how PBMs operate within the healthcare industry that may lead to high drug costs, including the use of specialty prescription designations, steering mechanisms, and preferential reimbursement rates for PBM-affiliated pharmacies. While the Interim Report states that it principally focuses on PBMs’ relationships with pharmacies rather than drug manufacturers, it includes a discussion of rebate contracts between drug manufacturers and PBMs that the report suggests may impede access to generics and biosimilars.

Read our client alert here.

On July 10, 2024, the U.S. Senate passed the Stopping Harmful Image Exploitation and Limiting Distribution (“SHIELD”) Act, which would criminalize the distribution of private sexually explicit or nude images online.  

Specifically, the legislation makes it unlawful to knowingly distribute a private intimate visual depiction of an individual that is obtained or created under circumstances in which the actor knew or reasonably should have known the individual depicted had a reasonable expectation of privacy, and where the distribution causes or is intended to cause harm to the individual depicted.  The bill would further criminalize the knowing distribution of a visual depiction of a nude minor with intent to abuse, humiliate, harass, or degrade the minor, or to arouse or gratify the sexual desire of any person.

The legislation exempts certain distributions of private intimate visual depictions from its scope, including distributions made “reasonably” and “in good faith” to report unlawful or unsolicited activity or pursuant to a legal, professional, or other lawful obligation, among other exceptions.

Notably, the bill also provides that it does not apply to any provider of a “communications service” with regard to intimate visual depictions that are provided by another information content provider — i.e., a third party on the provider’s service — unless the provider of the communications service “intentionally solicits” or “knowingly and predominantly distributes” such content.  The bill defines a “communications service” to include, among other entities, providers of an “interactive computer service” within scope of Section 230 of the Communications Decency Act.

The bill passed the Senate by voice vote and now heads over to the House, where there is already companion legislation with notable bipartisan support (H.R. 3686).  Though legislation is difficult to pass when the House and the Senate are divided, the SHIELD Act has a serious chance of passage before the end of this Congress.

Updated July 15, 2024.  Originally posted July 11, 2024.

On July 8, 2024, the Federal Communications Commission (FCC) and a group of Internet Service Providers, represented by national and regional trade associations, filed supplemental briefs with the U.S. Court of Appeals for the Sixth Circuit in In re MCP NO. 185. On July 15, the Sixth Circuit granted an administrative stay until August 15, 2024 “[t]o provide sufficient opportunity to consider the merits of the motion.”

The Sixth Circuit is considering challenges to the FCC’s Safeguarding and Securing the Open Internet Order (Open Internet Order), which reclassified broadband Internet access service as a telecommunications service under Title II of the Communications Act of 1934, as amended.  The Order was scheduled to take effect on July 22, 2024, but the ISP representatives asked for a stay.  The Sixth Circuit requested that the parties address how the Supreme Court’s decision overruling the Chevron doctrine in Loper Bright Enterprises v. Raimondo affects the petitioners’ motion to stay enforcement.

In its supplemental brief, the FCC argued that “Loper Bright has no direct relevance here because the [Open Internet Order] under review does not turn or rely on Chevron. Instead, the Order consistently focuses on ascertaining the best reading of the Communications Act using the traditional tools of statutory construction—exactly as Loper Bright instructs.”  The FCC explained that the decision in fact “reinforces the arguments set forth in our stay opposition,” because it “recognizes that a court’s legal interpretations may properly be informed by an agency’s expert assessment of predicate factual and technical issues within the agency’s specialized knowledge and expertise, as well as by roughly contemporaneous understandings of a statute by those closely familiar with it.”  The FCC also argued that “Loper Bright does not address petitioners’ failure to establish that they will suffer imminent and irreparable harm during the time it takes to decide this appeal,” which would be required for them to be granted a stay pending review.

In their supplemental brief, Ohio Cable Telecommunications Association and other petitioners argued that the Sixth Circuit should stay the Open Internet Order because they would likely succeed on the merits of their challenge in light of Loper Bright and the FCC’s inconsistent approach to Internet regulation.  First, they argued that the Net Neutrality rules “trigger[] and flunk[] the major-questions doctrine,” because the “Court expressly reaffirmed [in Loper Bright] that it expects Congress to delegate authority of ‘deep “economic and political significance”’ ‘”expressly” if at all.’”  Second, the petitioners argued that “although the Commission has barely invoked Chevron deference,” the decision established that the Open Internet Order is not entitled to such deference.  Third, the petitioners contended that the changing views of the FCC on how to classify broadband Internet access are the opposite of “‘contemporaneous[]’ and ‘consistent’ agency interpretations” that the Supreme Court said may be entitled to greater weight.

As noted above, the Sixth Circuit entered an administrative stay on July 15 to allow further consideration of the ISP representatives’ request for a stay pending review.

With most state legislative sessions across the country adjourned or winding down without enacting significant artificial intelligence legislation, Colorado and California continue their steady drive to adopt comprehensive legislation regulating the development and deployment of AI systems. 


Although Colorado’s AI law (SB 205), which Governor Jared Polis (D) signed into law in May, does not take effect until February 1, 2026, lawmakers have already begun a process for refining the nation’s first comprehensive AI law.  As we described here, the new law will require developers and deployers of “high-risk” AI systems to comply with certain requirements in order to mitigate risks of algorithmic discrimination. 

On June 13, Governor Polis, Attorney General Phil Weiser (D), and Senate Majority Leader Robert Rodriguez (D) issued a public letter announcing a “process to revise” the new law before it even takes effect, and “minimize unintended consequences associated with its implementation.”  The revision process will address concerns that the high cost of compliance will adversely affect “home grown businesses” in Colorado, including through “barriers to growth and product development, job losses, and a diminished capacity to raise capital.”

The letter proposes “a handful of specific areas” for revision, including:

  • Refining SB 205’s definition of AI systems to focus on “the most high-risk systems” in order to align with federal measures and frameworks in states with substantial technology sectors.  This goal aligns with the officials’ call for “harmony across any regulatory framework adopted by states” to “limit the burden associated with a multi-state compliance scheme that deters investment and hamstrings small technology firms.”  The officials add that they “remain open to delays in the implementation” of the new law “to ensure such harmonization.”  
  • Narrowing SB 205’s requirements to focus on developers of high-risk systems and avoid regulating “small companies that may deploy AI within third-party software that they use in the ordinary course of business.”  This goal addresses concerns of Colorado businesses that the new law could “inadvertently impose prohibitively high costs” on AI deployers.
  • Shifting from a “proactive disclosure regime” to a “traditional enforcement regime managed by the Attorney General investigating matters after the fact.”  This goal also focuses on protecting Colorado’s small businesses from prohibitively high costs that could deter investment and hamper Colorado’s technology sector.
Continue Reading Colorado and California Continue to Refine AI Legislation as Legislative Sessions Wane

Last week, on 4 July 2024, the German Parliament (Bundestag) passed significant changes to the country’s drug pricing and reimbursement laws. Just six months after the German Federal Health Ministry (BMG) presented a first draft bill for a “Medical Research Act” (Medizinforschungsgesetz or MFG), the German Parliament has now accepted a modified version of that bill. The Medical Research Act mainly (1) amends national laws for clinical trials with drugs and medical devices, (2) amends rules for ATMPs, (3) amends drug pricing and reimbursement laws (AMNOG), and (4) initiates a re-organization of the regulatory agencies and ethics committees.

In this blog, we take a closer look at the much-discussed changes in the German drug pricing and reimbursement area. We will focus on two key elements:

  • The controversial new feature of “confidential reimbursement prices”; and
  • The new link between drug pricing and local clinical trials, which offers pricing incentives for companies that can show that a “relevant part” of the clinical trials for a new medicine was conducted in Germany.

We noted in an earlier blog that the German rules for pharmaceutical pricing and reimbursement are among the most complicated legal areas in the entire world of life sciences law. With these new laws, Germany adds yet more complexity to its system.

1. Background

The discussed changes to the German drug pricing and reimbursement laws are part of the German Government’s new National Pharma Strategy that aims to enhance Germany’s attractiveness as a place for pharmaceutical research, development, and manufacturing. The Government presented an underlying strategy paper in December 2023 and the Medical Research Act is the first legislative implementation step of that strategy. For an overview of this new National Pharma Strategy, we invite you to read our previous blog on this topic.

The Medical Research Act was first presented to stakeholders in late January 2024. For a comprehensive overview of this first draft, please see our earlier blog. After an initial consultation, the Government revised the draft and initiated the legislative process at the end of May 2024. Overall, the Government moved at an unusually fast pace and succeeded in its plan to get the bill through Parliament before the summer break.

Continue Reading Germany amends drug pricing and reimbursement laws with “Medical Research Act” – Drug pricing becomes intertwined with local clinical research expectations

On June 18, 2024, Louisiana enacted HB 577, which prohibits “social media platforms” with more than 1 million users globally from displaying targeted advertising to Louisiana users that the platform has actual knowledge are under 18 years of age, and from selling the sensitive personal data of such users. The law amends the effective date of the state’s social media law, the Louisiana Secure Online Child Interaction and Age Limitation Act (“the SOCIAL Act”), to July 1, 2025. HB 577 also will take effect on July 1, 2025. This post summarizes the law’s key provisions.

  • Social Media Platform Definition: The law applies the definition of “social media platform” from the SOCIAL Act, which defines a social media platform as a public or semipublic internet-based service or app that has users in Louisiana and permits users to do all of the following:
    • Connect users to allow users to interact socially with each other within the service or app;
    • Construct a public or semipublic profile to sign into and use the service or app;
    • Populate a list of other users with whom an individual shares a social connection within the system, including subscribing to another user’s content; and
    • Create or post content viewable by others, such as on message boards, in chat rooms, or on a main feed that presents the user with content generated by other users.
  • Targeted Advertising Definition: Targeted advertising means displaying an advertisement to a user where the advertisement is selected based on personal data obtained from the user’s activities over time and across non-affiliated websites or online apps to predict the user’s preferences or interests.
    • Exceptions: Targeted advertising does not include:
      • (i) Advertising based on activities within a controller’s own website or app;
      • (ii) Advertising based on the context of a user’s current search query, website visit, or use of an app;
      • (iii) Advertising directed to a user in response to their request for information, products, services, or feedback; or
      • (iv) Processing personal data solely to:
        • measure or report on advertising performance, reach, or frequency, or
        • prevent fraud and abuse.
  • Data Processing Liability: Under HB 577, social media platforms are not liable for data processing to reasonably determine whether a user is a Louisiana resident or for an erroneous residency determination. Additionally, if a covered social media platform conducts age estimation, it is not liable for:
    • Data processing undertaken during the period when it is estimating age;
    • An erroneous estimation; or
    • Data processing in the absence of reasonable evidence the user is a minor.
  • Enforcement: The Louisiana Attorney General has exclusive authority to enforce the law. Social media platforms have 45 days to cure violations after receiving notice from the Louisiana Attorney General. The law authorizes civil penalties of up to $10,000 per violation against social media platforms and up to $5,000 against any person for each violation of a related administrative or court order.

July 10, 2024, Covington Alert

On July 3, 2024, Judge Ada Brown of the United States District Court for the Northern District of Texas granted the motions for a preliminary injunction—filed by Ryan LLC (“Ryan”) and several trade associations, including the U.S. Chamber of Commerce (“Chamber”)—to prevent the FTC’s rule banning non-compete clauses from going into effect, but the court’s order only applies to the named plaintiffs (i.e., it is not a nationwide injunction). The court has indicated that it will issue a final order on the merits by August 30, 2024, just a few days before the FTC’s rule is scheduled to go into effect on September 4. It is possible that Judge Brown will enjoin the non-compete ban nationwide in her final order.


In April, the FTC issued a final rule banning almost all non-competes with U.S. workers, with narrow exceptions, pursuant to its claimed authority to issue competition-related rules under Sections 5 and 6(g) of the FTC Act. That same day, Ryan challenged the FTC’s rule and, shortly thereafter, filed a motion to stay and preliminarily enjoin the rule, arguing that the FTC has no statutory authority to promulgate the rule, that the rule is the product of an unconstitutional exercise of power, and that the FTC’s acts were arbitrary and capricious. The Chamber and other trade groups intervened as plaintiffs on May 8, making substantially the same arguments.

The Order

In its Order, the court found that the Plaintiffs had demonstrated a likelihood of success that (1) the FTC does not have the statutory authority to engage in competition-related rulemaking, (2) the non-compete rule is arbitrary and capricious, and (3) the plaintiffs and intervenors had satisfied the standard to obtain injunctive relief.

Continue Reading Texas District Court Enjoins FTC’s Rule Banning Non-Compete Clauses

On 31 May 2024, the European Commission (“Commission”) adopted an amendment to its Regional Aid Guidelines (“RAG”), allowing EU Member States to grant higher amounts of aid to investment projects in disadvantaged areas of the EU that fall within the objectives of the Strategic Technologies for Europe Platform (“STEP”). STEP is an EU initiative designed to boost the EU’s industrial competitiveness and reinforce EU sovereignty by supporting critical and emerging strategic technologies and their respective value chains.

Key takeaways

  • In the EU, large businesses can only receive State aid from Member States for their large investment projects (“LIPs”) in production facilities if their projects take place in disadvantaged areas of the EU. The conditions to access such State support and the maximum aid amount are laid down in the RAG.
  • STEP’s objectives are to support the development and the manufacturing of clean tech, digital technologies, and bio-tech.
  • The amendment to the RAG allows Member States to grant large businesses higher amounts of aid for their LIPs where they contribute to the STEP objectives.

Regional aid

Aid to large businesses pursuing LIPs is generally considered unnecessary and highly distortive because these businesses already have access to capital and a significant presence on the market. Such aid can, in principle, only be authorised by the Commission under strict conditions and if it supports an initial investment in new production facilities, output diversification into new products, or a fundamental change in a production process.

Continue Reading The Commission amends regional aid rules to foster support for strategic technology projects