With the 2024 election rapidly approaching, the Biden Administration must race to finalize proposed agency actions as early as mid-May to avoid facing possible nullification if the Republican Party controls both chambers of Congress and the White House next year. 

The Congressional Review Act (CRA) allows Congress to overturn rules issued by the Executive Branch by enacting a joint resolution of disapproval that cancels the rule and prohibits the agency from issuing a rule that is “substantially the same.”  One of the CRA’s most distinctive features—a “lookback period”—gives the next Congress 60 days to review rules issued near the end of the last Congress.  This means that the Administration must finalize and publish certain rules long before Election Day to avoid being eligible for CRA review in the new year.

Overview of the CRA

The CRA requires federal agencies to submit all final rules to Congress before the rule may take effect.  It provides the House with 60 legislative days and the Senate with 60 session days to introduce a joint resolution of disapproval to overturn the rule.  This 60-day period counts every calendar day, including weekends and holidays, but excludes days that either chamber is out of session for more than three days pursuant to an adjournment resolution.  In the Senate, a joint resolution of disapproval receives only limited debate and may not be filibustered.  Moreover, if it has been more than 20 calendar days since Congress received a final rule and a joint resolution has not been reported out of the appropriate committee, a group of 30 Senators can file a petition to force a floor vote on the resolution.

If a CRA resolution receives a simple majority in both chambers and is signed by the President, or if Congress overrides a presidential veto, the rule cannot go into effect and is treated “as though such rule had never taken effect.”[1]  The agency is also barred from reissuing a rule that is “substantially the same,” unless authorized by future law.[2]    

Election Year Threat: CRA Lookback Period

These procedures pose special challenges for federal agencies in an election year.  If a rule is submitted to Congress within 60 days before adjournment, the CRA’s lookback provision resets the 60-day period for introducing a CRA resolution, restarting the clock in the next session of Congress.

This procedure ultimately requires the current administration to assess the threat of a CRA resolution against certain rules and determine whether to finalize each rule safely before the deadline or risk a potential CRA challenge.


On April 2, the California Senate Judiciary Committee held a hearing on the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047) and favorably reported the bill in a 9-0 vote (with 2 members not voting).  The vote marks a major step toward comprehensive artificial intelligence (AI) regulation in a state that is home to both Silicon Valley and the nation’s first comprehensive privacy law.

This legislation would require developers of large AI models to implement certain safeguards before training and deploying those models, and to report safety incidents involving AI technologies.  The bill would give the California Attorney General civil enforcement authority over violations and establish a new “Frontier Model Division” within the Department of Technology to aid enforcement. 

At the hearing, witnesses—including Encode Justice, the Center for AI Safety, and Economic Security California—and legislators praised the bill’s goal of regulating large AI models while also expressing concerns about the feasibility of enforcement and potential effects on AI innovation.  The Chamber of Progress and California Chamber of Commerce (CalChamber) testified in opposition to the bill.  A coalition of advocacy and industry groups, led by CalChamber, has also signed a letter opposing the bill.

Covered Models.  Mirroring the White House’s 2023 Executive Order, SB 1047 would regulate developers of “covered models” trained on computers with processing power above certain thresholds, while also covering models of “similar or greater performance.”  Developers would also be prohibited from training or deploying a covered model that presents an unreasonable risk of “critical harm,” such as the creation or use of weapons of mass destruction, cybersecurity attacks causing catastrophic damages (greater than $500 million), activities undertaken by AI that cause mass casualties or catastrophic damages (greater than $500 million) and that would be criminal conduct if committed by humans, or other severe threats to public safety.

AI Developer Pre-Training Requirements.  SB 1047 would establish a set of requirements for developers of covered models that apply before a covered model is trained, including:  

  • Positive Safety Determinations.  Developers would be required to assess whether a model will have lower performance than covered models and lack “hazardous capabilities.”  Models that receive such a determination are exempt from the bill’s requirements.
  • Protections & Safeguards.  Developers would be required to implement cybersecurity protections against misuse, ensure models can be fully shut down, and follow industry best practices and NIST and Frontier Model Division guidance.
  • Safety & Security Protocols.  Developers would be required to implement, for each covered model, a “safety and security protocol” with assurances of safeguards, the requirements that apply to the developer, and procedures to test the model’s safety.

AI Developer Pre-Deployment Requirements.  After training a covered model, SB 1047 would require developers to perform “capability testing” to assess whether a positive safety determination is warranted.  If not, developers would be required to implement safeguards that prevent harmful uses and ensure a model’s actions and “resulting critical harms can be accurately and reliably attributed” to the model and responsible users.

AI Developer Ongoing Requirements.  SB 1047 would also establish ongoing obligations for developers, including annual reviews of safety and security protocols, annual certifications of compliance to the Frontier Model Division, periodic reviews of procedures, policies, and safeguards, and reporting of “AI safety incidents” within 72 hours of learning of the incident.

Whistleblower Protections.  SB 1047 would prohibit developers from preventing employees from disclosing information to the California Attorney General indicating a developer’s noncompliance, or from retaliating against employees who do so. 

SB 1047 has a long way to go before becoming law.  Should it be enacted, however, it could—like California’s comprehensive privacy legislation before it—become the de facto standard for AI regulation in the United States, filling the void created in the absence of comprehensive federal AI legislation.  We are closely monitoring these and related state AI developments as they unfold.  A summary of key themes in recent state AI bills is available here, along with our overview of recent state synthetic media and generative AI legislation here.  We will continue to provide updates on meaningful developments related to artificial intelligence and technology across our Global Policy Watch, Inside Global Tech, and Inside Privacy blogs.

The Northern District of Illinois recently denied certification to several proposed classes of purchasers of a seizure drug called Acthar in City of Rockford v. Mallinckrodt ARD, Inc., No. 3:17-cv-50107, 2024 WL 1363544 (Mar. 29, 2024).  Class plaintiffs had alleged that defendant Express Scripts, a drug distributor, conspired with Mallinckrodt, a drug manufacturer, to raise the price of Acthar through an exclusive distribution arrangement.  In denying certification to the damages classes, the court determined that plaintiffs had not met Rule 23(b)(3)’s predominance standard because they lacked a reliable economic model showing that damages were “capable of measurement on a classwide basis,” as required by Comcast Corp. v. Behrend, 569 U.S. 27, 34 (2013).

Plaintiffs’ only classwide evidence on damages was a statistical model of drug prices offered by an academic.  The court rejected this evidence as unreliable under Rule 702 due to unsupported assumptions in the expert’s model of Acthar prices in the hypothetical world.

The expert assumed that but-for defendant’s conduct, Acthar’s prices would have moved in a manner similar to the price level of drugs across the pharmaceutical industry as a whole—as measured by the industry’s Producer Price Index (“PPI”).  Yet the expert provided only “conclusory assertion[s]” to justify this central assumption: that the PPI’s drug industry-average prices were a reliable “yardstick” for Acthar’s price.  Similarly, the court agreed with critiques offered by defendant’s expert that plaintiffs’ expert had failed to consider other economic factors, such as differences or changes in market shares and structures between Acthar and other drugs, that could have caused prices for Acthar to diverge from industry averages, irrespective of defendant’s conduct.  The court thus concluded that the expert’s model failed to isolate the effect of any allegedly unlawful conduct from other factors that might have affected prices for Acthar.

Because the expert’s model was plaintiffs’ only class-wide evidence of damages, the court determined that individualized questions of damages would “inevitably overwhelm questions common to the class” in violation of Comcast and the Rule 23 predominance standard.  The case is yet another example of courts rigorously reviewing expert testimony at the class certification stage in major antitrust litigation to determine whether the prerequisites of class certification are satisfied.

Last summer, the antitrust agencies proposed sweeping changes to the Hart-Scott-Rodino (“HSR”) Act premerger notification form and associated rules. Covered in detail here, the proposed changes would significantly increase the time, burden, and costs on merging parties to prepare an HSR filing. The public comment period ended on September 27, 2023. Since then, the agencies have given little indication of what changes would be made in response to the comments or when the proposed rules would be finalized.

Yesterday, DOJ antitrust officials provided updates on both fronts during the American Bar Association’s annual Antitrust Spring Meeting. Speaking on a panel with others from DOJ’s Antitrust Division, Andrew Forman (Deputy Assistant Attorney General) said the new HSR rules will be finalized “in a matter of weeks, as opposed to months.” He noted, however, that his prediction was uncertain, in part because the FTC leads the HSR rulemaking process.

DOJ officials also previewed the forthcoming rules’ content. Forman expects that, compared to the current proposal, the final rules will have “material differences” that reduce the burden on merging parties. Speaking on a separate panel, Suzanne Morris (Deputy Director, Civil Enforcement Operations) echoed these comments. She explained that the agencies are reconsidering whether they need certain types of information to assess deals and will revise the proposed rules to alleviate burdens “as appropriate.”

Parties should still expect significant differences between the new HSR rules and the rules currently in place. Forman criticized the existing rules as outdated, saying they are both overinclusive—requiring more information than necessary from the parties—and underinclusive—not requiring information necessary to assess antitrust risk.

You can learn about other recent developments in merger enforcement at Covington’s content hub on the topic.

On April 2, 2024, the FCC released a Report and Order (the “Order”) and Further Notice of Proposed Rulemaking (the “Further Notice”) adopting, on a unanimous, bipartisan basis, a rule change that allows radio broadcasters to use FM boosters to direct hyper-local programming to specific geographic areas for a portion of each hour, rather than, as radio stations have done for a century, sending the same broadcast stream over the entire market.  Prior to the rule change, radio stations could use FM boosters only to retransmit the main signal to areas not well covered by the primary antenna.

FCC Commissioner Geoffrey Starks underscored the innovation this rule change will enable, explaining, “[i]t’s about time we gave these broadcasters – on a voluntary basis – the opportunity to try out their plans.  What they have in mind no doubt presents a fresh way of thinking about FM.”  Commissioner Brendan Carr similarly celebrated that “the Order immediately opens up new opportunities for all FM radio broadcasters which operate in an intensely competitive media environment.”  Small business advocates and civil rights leaders also voiced support for the change.  The rule change was initiated by a Petition for Rulemaking filed by GeoBroadcast Solutions (GBS) with Covington’s support in March 2020.

The Order

The Order creates a new category of boosters used to originate content, “Program Originating FM Booster Stations,” which are allowed to originate content for three minutes each hour.  The Commission determined that using boosters in this way would be in the public interest and would not create significant interference issues.  The Commission dismissed concerns about the competitive effect of program originating boosters and concluded that extensive testing conducted by GBS demonstrated limited risk of signal interference.  While some collateral decisions are left to be resolved in the Further Notice, the Commission adopted an interim process allowing FM broadcasters to begin deploying this technology when the Order becomes effective thirty days after publication in the Federal Register.  Under this interim process, program originating boosters will be licensed for one year, with a presumption of renewal, and broadcasters can apply for up to twenty-five boosters.

The Further Notice

The Further Notice seeks comment on some related issues and possible rule changes as the Commission establishes permanent licensing for Program Originating Boosters, including questions regarding broadcast political files, notice obligations, patent issues, emergency alert system processes, and similar issues.  Comments addressing these topics will be due thirty days after the date the Further Notice is published in the Federal Register and reply comments will be due thirty days thereafter.

On April 2, the Enforcement Division of the California Privacy Protection Agency issued its first Enforcement Advisory, titled “Applying Data Minimization to Consumer Requests.”  The Advisory highlights certain provisions of and regulations promulgated under the California Consumer Privacy Act (“CCPA”) that “reflect the concept of data minimization” and provides two examples that illustrate how businesses may apply data minimization principles in certain scenarios.

First, the Advisory includes the CCPA’s data minimization principle reflected in Civil Code § 1798.100(c): “[a] business’ collection, use, retention, and sharing of a consumer’s personal information shall be reasonably necessary and proportionate” to achieve the purpose for which it was collected or processed, or another, compatible and disclosed purpose. 

The Advisory notes that the regulations “underscor[e] this principle” by explaining that whether a business’s data practices are “reasonably necessary and proportionate” within the meaning of the statute is based on (1) “[t]he minimum personal information that is necessary to achieve the purpose identified,” (2) “possible negative impacts to consumers posed by the business’s collection or processing of the personal information,” and (3) “the existence of additional safeguards for the personal information” to address those possible negative impacts.  The Advisory next highlights other CCPA regulations that “reflect the concept of data minimization.”  For example, the Advisory identifies certain regulations that prohibit requiring consumers to provide “additional information beyond what is necessary” to exercise certain rights under the CCPA, including 11 CCR § 7025(c)(2) concerning opt-out preference signals.  

The Advisory also describes two hypothetical “illustrative scenarios in which a business might encounter the data minimization principle.”  The first scenario contemplates a business’s response to a consumer’s request to opt out of sale/sharing, and the second a business’s process for verifying a consumer’s identity with respect to a request to delete.  In both, the Advisory provides examples of questions businesses could consider to apply data minimization principles to the scenarios.  These questions reflect the three bases set out in the regulations to determine whether a business’s data practices are “reasonably necessary and proportionate,” as discussed above.  For example, per the Advisory, a business verifying a deletion request could consider: “We already have certain personal information from this consumer.  Do we need to ask for more personal information than we already have?”

Finally, the Advisory explains that Enforcement Advisories are intended to “provide[ ] additional detail about principles of the CCPA and highlight[ ] observations of non-compliance to deter violations.”  They do not “implement, interpret, or make specific the law enforced or administered by the California Privacy Protection Agency, establish substantive policy or rights, constitute legal advice, or reflect the views of the Agency’s Board.”  The Agency further states that adherence to guidance in an advisory is not a safe harbor from potential enforcement actions, which are assessed on a case-by-case basis. 

On March 28, 2024, the U.S. Department of Justice and the Federal Trade Commission jointly filed a Statement of Interest on behalf of the United States in the case of Cornish-Adebiyi v. Caesars Entertainment, 1:23-CV-02536 (D.N.J. Mar. 28, 2024). 

In the Statement, the agencies express their disagreement with two legal arguments asserted by the Cornish-Adebiyi defendants in their motion to dismiss.  First, the agencies argue that, although communications between competitors “can be highly probative of an agreement,” there is “no rule requiring proof of such communications” in order to prove the existence of an agreement for purposes of Section 1.  In the agencies’ view, Section 1 claims may also be demonstrated by “actions alone,” as in the case of a tacit agreement, or an agreement facilitated by communications between competitors and a central intermediary, as in the case of a “hub-and-spoke” conspiracy.

Additionally, and in response to the Cornish-Adebiyi defendants’ argument that the alleged conduct does not constitute per se unlawful price fixing if pricing recommendations are non-binding, the agencies argue that per se illegal price fixing includes agreements to fix the “starting point of prices,” regardless of “how often [an agreement] is followed.”

The Statement, filed with the District of New Jersey, follows similar Statements of Interest submitted in the past year by the United States in two other ongoing algorithmic price fixing cases, In re: RealPage, 3:23-MD-03071 (M.D. Tenn. Nov. 15, 2023), and Duffy v. Yardi, 2:23-cv-01391 (W.D. Wash. Mar. 1, 2024).  Last December, the district court in RealPage did not accept plaintiffs’ and the agencies’ argument for per se treatment, instead dismissing one of the two complaints and allowing the other to proceed only under a Rule of Reason theory.  In re RealPage, Inc., Rental Software Antitrust Litig. (No. II), 3:23-MD-03071, 2023 WL 9004806, at *30 (M.D. Tenn. Dec. 28, 2023).

We will continue to update you on meaningful developments related to algorithmic price fixing here and across our blogs.

A new post on the Covington Inside Privacy blog discusses remarks by California Privacy Protection Agency (CPPA) Executive Director Ashkan Soltani at the International Association of Privacy Professionals’ global privacy conference last week.  The remarks covered the CPPA’s priorities for rulemaking and administrative enforcement of the California Consumer Privacy Act, including with respect to connected vehicles and artificial intelligence.  You can read the post here.

On January 17, 2024, the European Data Protection Board (“EDPB”) published its report on the 2023 Coordinated Enforcement Framework (“CEF”), which examines the current landscape and obstacles faced by data protection officers (“DPOs”) across the EU.  In particular, the report provides a snapshot of the findings of each supervisory authority (“SA”) on the role of DPOs, with a particular focus on (i) the challenges DPOs face and (ii) recommendations to mitigate and address these obstacles in light of the GDPR.  This blog post summarizes the key findings of the EDPB’s 2023 CEF report.


The 2023 CEF was conducted by the EU SAs, each of whom sent a selection of controllers and processors in their jurisdictions a pre-agreed questionnaire, in some cases slightly modified from the original, to be completed by their respective DPOs.  In a few cases, questionnaires were completed by a member of an organization’s senior management (instead of a DPO).

Key Takeaways

The report highlights the following key findings and makes the following recommendations:

  • Insufficient transparency on DPOs.  Several SAs noted that a number of organizations did not always publicly disclose or provide their SAs with contact information for their DPOs (e.g., the DPO’s email address; there is no need to include the DPO’s name), which may contravene a data subject’s right to information and ability to access their personal data.
    • SAs’ key recommendations:  Organizations should ensure that a DPO’s contact details are made available to the public to enable effective communication with data subjects and SAs.  They will also need to maintain up-to-date contact information and communicate any changes to data subjects (e.g., in their privacy notice).
  • Insufficient resources allocated to DPOs.  Several SAs noted that a number of DPOs did not have adequate resources to perform their tasks effectively.
    • SAs’ key recommendations:  Organizations should ensure that adequate financial and human resources are provided to DPOs, including: (i) completing a survey to determine the organization’s needs, particularly in terms of personnel required to assist the DPO and the type of matters the DPO is or should be involved in; (ii) allocating an independent budget to DPOs that ensures their autonomy; and (iii) providing internal teams to support the DPO.  The SAs also endorse training to enable staff to stay up-to-date with the latest privacy developments.
  • Insufficient involvement of DPOs in completing privacy-related tasks.  Several SAs noted that a number of DPOs did not always have (i) access to information on matters falling within their remit, including data subject access requests (“DSARs”), data breaches, and so forth; and (ii) information regarding why their organizations may have deviated from their recommendations.
    • SAs’ key recommendations:  DPOs should always be consulted on questions related to data privacy.  To this end, organizations should develop and implement internal policies to determine when a DPO’s involvement is necessary (e.g., DSAR, data breaches, etc.), as well as coordinate with other key departments (e.g., HR, Compliance, IT, etc.).
  • Insufficient oversight of conflicts of interest and reporting mechanisms to high-level management.  Several SAs noted that a high number of DPOs reported that they can receive instructions regarding the performance of their tasks and/or may hold additional roles in the organization that could pose a conflict (in light of Article 38(3) and (6) of the GDPR and the CJEU’s recent judgment on DPOs’ conflicts of interest).
    • SAs’ key recommendations:  Organizations should: (i) raise awareness regarding the DPO’s role and responsibilities; (ii) identify roles that would be incompatible with the function of DPO; and (iii) draw up and circulate internal policies identifying a DPO’s tasks.

What’s next?

Based on the results of the 2023 survey, the EDPB and SAs will develop further guidance and additional tools (e.g., training, workshops, factsheets, etc.).   SAs have also indicated that they may launch investigations or sectoral audits on the basis of the information gleaned through the survey.

*           *           *

Covington’s Data Privacy and Cybersecurity team regularly advises companies on their most challenging compliance issues in the EU and other key markets, including on the designation and role of DPOs and on data subjects’ rights.  Our team is happy to assist companies with any questions relating to DPOs, as well as any other privacy or cybersecurity questions.

(This blog post was written with the contributions of Diane Valat.)

This is the thirty-fourth in a series of Covington blogs on implementation of Executive Order 14028, “Improving the Nation’s Cybersecurity,” issued by President Biden on May 12, 2021 (the “Cyber EO”).  The first blog summarized the Cyber EO’s key provisions and timelines, and the subsequent blogs described the actions taken by various government agencies to implement the Cyber EO from June 2021 through January 2024.  This blog describes key actions taken to implement the Cyber EO, as well as the U.S. National Cybersecurity Strategy, during February 2024.  It also describes key actions taken during February 2024 to implement President Biden’s Executive Order on Artificial Intelligence (the “AI EO”), particularly its provisions that impact cybersecurity, secure software, and federal government contractors.

NIST Publishes Cybersecurity Framework 2.0

On February 26, 2024, the U.S. National Institute of Standards and Technology (“NIST”) published version 2.0 of its Cybersecurity Framework.  The NIST Cybersecurity Framework (“CSF” or “Framework”) provides a taxonomy of high-level cybersecurity outcomes that can be used by any organization, regardless of its size, sector, or relative maturity, to better understand, assess, prioritize, and communicate its cybersecurity efforts.  CSF 2.0 makes some significant changes to the Framework, particularly in the areas of Governance and Cybersecurity Supply Chain Risk Management (“C-SCRM”).  Covington’s Privacy and Cybersecurity group has posted a blog that discusses CSF 2.0 and those changes in greater detail.

NTIA Requests Comment Regarding “Open Weight” Dual-Use Foundation AI Models

Also on February 26, the National Telecommunications and Information Administration (“NTIA”) published a request for comments on the risks, benefits, and possible regulation of “dual-use foundation models for which the model weights are widely available.”  Among other questions raised by NTIA in the document are whether the availability of public model weights could pose risks to infrastructure or the defense sector.  NTIA is seeking comments in order to prepare a report that the AI EO requires by July 26, 2024 on the risks and benefits of private companies making the weights of their foundational AI models publicly available.  NTIA’s request for comments notes that “openness” or “wide availability” are terms without clear definition, and that “more information [is] needed to detail the relationship between openness and the wide availability of both model weights and open foundation models more generally.”  NTIA also requests comments on potential regulatory regimes for dual-use foundation models with widely available model weights, as well as the kinds of regulatory structures “that could deal with not only the large scale of these foundation models, but also the declining level of computing resources needed to fine-tune and retrain them.”
