Photo of Holly Fechner

Holly Fechner

Holly Fechner advises clients on complex public policy matters that combine legal and political opportunities and risks. She leads teams that represent companies, entities, and organizations in significant policy and regulatory matters before Congress and the Executive Branch.

She is a co-chair of Covington’s Technology Industry Group and a member of the Covington Political Action Committee board of directors.

Holly works with clients to:

  • Develop compelling public policy strategies
  • Research law and draft legislation and policy
  • Draft testimony, comments, fact sheets, letters and other documents
  • Advocate before Congress and the Executive Branch
  • Form and manage coalitions
  • Develop communications strategies

She is the Executive Director of Invent Together and a visiting lecturer at the Harvard Kennedy School of Government. She serves on the board of directors of the American Constitution Society.

Holly served as Policy Director for Senator Edward M. Kennedy (D-MA) and Chief Labor and Pensions Counsel for the Senate Health, Education, Labor & Pensions Committee.

She received The American Lawyer’s “Dealmaker of the Year” award in 2019. The Hill has named her a “Top Lobbyist” from 2013 to the present, and she has been ranked by Chambers USA - America's Leading Business Lawyers from 2012 to the present.

Nearly a year after Senate Majority Leader Chuck Schumer (D-NY) launched the SAFE Innovation Framework for artificial intelligence (AI) with Senators Mike Rounds (R-SD), Martin Heinrich (D-NM), and Todd Young (R-IN), the bipartisan group has released a 31-page “Roadmap” for AI policy.  The overarching theme of the Roadmap is “harnessing the full potential of AI while minimizing the risks of AI in the near and long term.”

In contrast to Europe’s approach to regulating AI, the Roadmap does not propose or even contemplate a comprehensive AI law.  Rather, it identifies key themes and areas of agreement and directs the relevant congressional committees of jurisdiction to legislate on key issues.  The Roadmap recommendations are informed by the nine AI Insight Forums that the bipartisan group convened over the last year.

  • Supporting U.S. Innovation in AI.  The Roadmap recommends at least $32 billion in funding per year for non-defense AI innovation, and the authors call on the Appropriations Committee to “develop emergency appropriations language to fill the gap between current spending levels and the [National Security Commission on AI (NSCAI)]-recommended level,” suggesting the bipartisan group would like to see Congress increase funding for AI as soon as this year. The funding would cover a host of purposes, such as AI R&D, including AI chip design and manufacture; funding the outstanding CHIPS and Science Act accounts that relate to AI; and AI testing and evaluation at NIST.
    • This pillar also endorses the bipartisan Creating Resources for Every American to Experiment with Artificial Intelligence (CREATE AI) Act (S. 2714), which would broaden nonprofit and academic researchers’ access to AI development resources including computing power, datasets, testbeds, and training through a new National Artificial Intelligence Research Resource.  The Roadmap also supports elements of the Future of AI Innovation Act (S. 4178) related to “grand challenge” funding programs, which aim to accelerate AI development through prize competitions and federal investment initiatives.
    • The bipartisan group recommends including funds for the Department of Defense and DARPA to address national security threats and opportunities in the emergency funding measure.  
  • AI and the Workforce.  The Roadmap recommends committees of jurisdiction consider the impact of AI on U.S. workers and ensure that working Americans benefit from technological progress, including through training programs and by studying the impacts of AI on workers.  Importantly, the bipartisan group recommends legislation to “improve the U.S. immigration system for high-skilled STEM workers.”  The Roadmap does not address benefit programs for displaced workers.

Continue Reading Bipartisan Senate AI Roadmap Released

As the 2024 elections approach and the window for Congress to consider bipartisan comprehensive artificial intelligence (AI) legislation shrinks, California officials are attempting to guard against a generative AI free-for-all—at least with respect to state government use of the rapidly advancing technology—by becoming the largest state to issue rules for state procurement of AI technologies. 

Senate Commerce Committee Chair Maria Cantwell (D-WA) and Senators Todd Young (R-IN), John Hickenlooper (D-CO), and Marsha Blackburn (R-TN) recently introduced the Future of AI Innovation Act, a legislative package that addresses key bipartisan priorities to promote AI safety, standardization, and access.  The bill would also advance U.S. leadership in AI by facilitating R&D.

With the 2024 election rapidly approaching, the Biden Administration must race to finalize proposed agency actions as early as mid-May to avoid facing possible nullification if the Republican Party controls both chambers of Congress and the White House next year. 

The Congressional Review Act (CRA) allows Congress to overturn rules issued by the Executive Branch by enacting a joint resolution of disapproval that cancels the rule and prohibits the agency from issuing a rule that is “substantially the same.”  One of the CRA’s most unique features—a 60-day “lookback period”—allows the next Congress 60 days to review rules issued near the end of the last Congress.  This means that the Administration must finalize and publish certain rules long before Election Day to avoid being eligible for CRA review in the new year.

Overview of the CRA

The CRA requires federal agencies to submit all final rules to Congress before the rule may take effect.  It provides the House with 60 legislative days and the Senate with 60 session days to introduce a joint resolution of disapproval to overturn the rule.  This 60-day period counts every calendar day, including weekends and holidays, but excludes periods when either chamber is out of session for more than three days pursuant to an adjournment resolution.  In the Senate, a joint resolution of disapproval receives only limited debate and may not be filibustered.  Moreover, if more than 20 calendar days have passed since Congress received a final rule and a joint resolution has not been reported out of the appropriate committee, a group of 30 Senators can file a petition to force a floor vote on the resolution.

If a CRA resolution receives a simple majority in both chambers and is signed by the President, or if Congress overrides a presidential veto, the rule cannot go into effect and is treated “as though such rule had never taken effect.”[1]  The agency is also barred from reissuing a rule that is “substantially the same,” unless authorized by future law.[2]    

Election Year Threat: CRA Lookback Period

These procedures pose special challenges for federal agencies in an election year.  If a rule is submitted to Congress within 60 days before adjournment, the CRA’s lookback provision allows the 60-day timeline for introducing a CRA resolution to start over in the next session of Congress.

This procedure ultimately requires the current administration to assess the threat of a CRA resolution against certain rules and determine whether to finalize the rule before the deadline or risk a potential CRA challenge. Continue Reading Congressional Review Act Threat Looms Over Biden Administration Rulemakings

On April 2, the California Senate Judiciary Committee held a hearing on the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047) and favorably reported the bill in a 9-0 vote (with 2 members not voting).  The vote marks a major step toward comprehensive artificial intelligence (AI) regulation in a

On February 20, Speaker Mike Johnson (R-LA) and Democratic Leader Hakeem Jeffries (D-NY) announced a new Artificial Intelligence (AI) task force in the House of Representatives, with the goal of developing principles and policies to promote U.S. leadership and security with respect to AI.  Rep. Jay Obernolte (R-CA) will chair the task force, joined by

On December 12, the U.S. House Select Committee on the Strategic Competition Between the United States and the Chinese Communist Party (the “Select Committee”) adopted a broad set of policy recommendations intended to reduce the United States’ economic and technological ties with China across a broad swath of the economy.

The Select Committee passed the 53-page report, containing 130 recommendations, on a bipartisan, though not unanimous, voice vote.  The report is organized around three pillars:

  1. “Reset the Terms of Our Economic Relationship with the PRC,” emphasizing the scope of the United States’ strategic dependence on China;
  2. “Stem the Flow of U.S. Capital and Technology Fueling the PRC’s Military Modernization and Human Rights Abuses,” which calls for increasingly hawkish trade and investment-review policies; and
  3. “Invest in Technological Leadership and Build Collective Economic Resilience in Concert with Allies,” focused on strengthening the workforce, critical supply chains, and related capabilities.

The report urges Congress and the Administration to deploy a variety of tools to compete with China, including by building on the Biden Administration’s recent executive orders on artificial intelligence and outbound investment.  With respect to trade, the Select Committee recommends implementing stricter export controls and moving China to a new tariff column, effectively revoking its permanent normal trade relations (PNTR) status.  Furthermore, the report calls for broadly expanding authorities for the Committee on Foreign Investment in the United States (CFIUS), as well as for investments in international economic development to counter China’s efforts to influence the economic affairs of trading partners through its Belt and Road initiative.  The report recommends several steps to protect U.S. innovators from intellectual-property-related abuses and sanction companies in China that threaten U.S. national security. Continue Reading House Select Committee report urges “new path” for economic engagement with China

Recently, a bipartisan group of U.S. senators introduced new legislation to address transparency and accountability for artificial intelligence (AI) systems, including those deployed for certain “critical impact” use cases. While many other targeted, bipartisan AI bills have been introduced in both chambers of Congress, this bill appears to be one of the first to propose

The field of artificial intelligence (“AI”) is at a tipping point. Governments and industries are under increasing pressure to forecast and guide the evolution of a technology that promises to transform our economies and societies. In this series, our lawyers and advisors provide an overview of the policy approaches and regulatory frameworks for AI in jurisdictions around the world. Given the rapid pace of technological and policy developments in this area, the articles in this series should be viewed as snapshots in time, reflecting the current policy environment and priorities in each jurisdiction.

The following article examines the state of play in AI policy and regulation in the United States. The previous article in this series covered the European Union.

Future of AI Policy in the U.S.

U.S. policymakers are focused on artificial intelligence (AI) platforms as they explode into the mainstream.  AI has emerged as an active policy space across Congress and the Biden Administration, as officials scramble to educate themselves on the technology while crafting legislation, rules, and other measures to balance U.S. innovation leadership with national security priorities.

Over the past year, AI issues have drawn bipartisan interest and support.  House and Senate committees have held nearly three dozen hearings on AI this year alone, and more than 30 AI-focused bills have been introduced so far this Congress.  Two bipartisan groups of Senators have announced separate frameworks for comprehensive AI legislation.  Several AI bills—largely focused on the federal government’s internal use of AI—have also been voted on and passed through committees. 

Meanwhile, the Biden Administration has announced plans to issue a comprehensive executive order this fall to address a range of AI risks under existing law.  The Administration has also taken steps to promote the responsible development and deployment of AI systems, including securing voluntary commitments regarding AI safety and transparency from 15 technology companies. 

Despite strong bipartisan interest in AI regulation, commitment from leaders of major technology companies engaged in AI R&D, and broad support from the general public, passing comprehensive AI legislation remains a challenge.  No consensus has emerged around either substance or process, with different groups of Members, particularly in the Senate, developing their own versions of AI legislation through different procedures.  In the House, a bipartisan bill would punt the issue of comprehensive regulation to the executive branch, creating a blue-ribbon commission to study the issue and make recommendations.

I. Major Policy & Regulatory Initiatives

Three versions of a comprehensive AI regulatory regime have emerged in Congress – two in the Senate and one in the House.  We preview these proposals below.

A. SAFE Innovation: Values-Based Framework and New Legislative Process

In June, Senate Majority Leader Chuck Schumer (D-NY) unveiled a new bipartisan proposal—with Senators Martin Heinrich (D-NM), Todd Young (R-IN), and Mike Rounds (R-SD)—to develop legislation to promote and regulate artificial intelligence.  Leader Schumer proposed a plan to boost U.S. global competitiveness in AI development, while ensuring appropriate protections for consumers and workers. Continue Reading Spotlight Series on Global AI Policy — Part II: U.S. Legislative and Regulatory Developments

Unless Congress reaches an agreement to keep the lights on, the U.S. government appears headed for a shutdown at midnight on October 1.  As the deadline looms, stakeholders should not let the legislative jockeying overshadow another consequence of a funding lapse: regulatory delay.  Under normal circumstances, federal agencies publish thousands of rules per year, covering agriculture, health care, transportation, financial services, and a host of other issues.  In a shutdown, however, most agency proceedings to develop and issue these regulations would grind to a halt, and a prolonged funding gap would lead to uncertainty for stakeholders, particularly as the 2024 elections approach.  Another consequence is that more regulations could become vulnerable to congressional disapproval under the Congressional Review Act (CRA).

The Administrative Procedure Act (APA) provides that “General notice of proposed rulemaking shall be published in the Federal Register,” and prescribes requirements for the contents of an agency rulemaking notice.  To initiate or finalize a rulemaking, agencies must submit rules to the Office of the Federal Register (OFR)—the National Archives and Records Administration agency that publishes the Federal Register, the government’s daily journal of rules, regulations, and other activities.

When government funding lapses, however, publication of critical rulemaking documents slows.  Under the Antideficiency Act, both an agency seeking to publish documents and OFR are prohibited from spending or obligating funds during a shutdown.  Government agencies are also prohibited from accepting voluntary services for government work—except in cases of “emergencies involving the safety of human life or the protection of property.”

These restrictions significantly curtail the ability of agencies to initiate new rulemakings that don’t qualify for the exception, or to advance rulemakings that began before a shutdown.  Federal employees are not allowed to draft or submit rules for publication, accept or review public comments, or revise or publish final rules while their agency is unfunded. 

Likewise, because the APA and other statutes require certain executive actions to be published in the Federal Register, OFR must also be funded and operational to publish most agency documents, even if the agency publishing the document has not experienced a funding lapse.

To account for this issue, OFR has set forth guidelines in advance of prior shutdowns that detail when agencies—funded or not—may submit documents for publication in the Federal Register, and when OFR will publish those documents.

First, for unfunded agencies—which, as of today, would include all federal agencies whose budgets are subject to annual appropriations—OFR will only publish documents that are necessary to safeguard human life, protect property, or “provide other emergency services consistent with the performance of functions and services exempted under the Antideficiency Act.”   Agency materials related to “ongoing, regular functions of government” that pose no “imminent threat” to human safety or property protection are not permissible activities for unfunded agencies, and therefore inappropriate for OFR publication. 

This restriction would have significant implications for agency regulatory efforts across the government, with impact on a wide range of regulated sectors.  In one notable example, the rulemaking to implement the President’s recent outbound investment executive order (which we and our colleagues have discussed here) would likely stall for the duration of the shutdown.  Nothing in the order or the Treasury Department’s advance notice of proposed rulemaking (ANPRM) suggests that the order—despite being issued under the International Emergency Economic Powers Act (IEEPA)—involves imminent threats to human safety or property.  Thus, while public comments on the ANPRM are due on September 28, before the end of the fiscal year, a subsequent lapse in the Treasury Department’s funding would prohibit agency staff from reviewing public comments, scheduling and taking meetings with stakeholders, and revising and finalizing the proposed rule. Continue Reading Looming Shutdown Elevates Congressional Review Act Threat for New Regulations