Holly Fechner

Holly Fechner advises clients on complex public policy matters that combine legal and political opportunities and risks. She leads teams that represent companies, entities, and organizations in significant policy and regulatory matters before Congress and the Executive Branch.

She is a co-chair of Covington’s Technology Industry Group and a member of the Covington Political Action Committee board of directors.

Holly works with clients to:

  • Develop compelling public policy strategies
  • Research law and draft legislation and policy
  • Draft testimony, comments, fact sheets, letters and other documents
  • Advocate before Congress and the Executive Branch
  • Form and manage coalitions
  • Develop communications strategies

She is the Executive Director of Invent Together and a visiting lecturer at the Harvard Kennedy School of Government. She serves on the board of directors of the American Constitution Society.

Holly served as Policy Director for Senator Edward M. Kennedy (D-MA) and Chief Labor and Pensions Counsel for the Senate Health, Education, Labor & Pensions Committee.

She received The American Lawyer’s “Dealmaker of the Year” award in 2019. The Hill has named her a “Top Lobbyist” from 2013 to the present, and she has been ranked by Chambers USA – America’s Leading Business Lawyers from 2012 to the present.

On February 20, Speaker Mike Johnson (R-LA) and Democratic Leader Hakeem Jeffries (D-NY) announced a new Artificial Intelligence (AI) task force in the House of Representatives, with the goal of developing principles and policies to promote U.S. leadership and security with respect to AI.  Rep. Jay Obernolte (R-CA) will chair the task force, joined by

On December 12, the U.S. House Select Committee on the Strategic Competition Between the United States and the Chinese Communist Party (the “Select Committee”) adopted a broad set of policy recommendations intended to reduce the United States’ economic and technological ties with China across a wide swath of the economy.

The Select Committee passed the 53-page report, containing 130 recommendations, on a bipartisan, though not unanimous, voice vote.  The report is organized around three pillars:

  1. “Reset the Terms of Our Economic Relationship with the PRC,” emphasizing the scope of the United States’ strategic dependence on China;
  2. “Stem the Flow of U.S. Capital and Technology Fueling the PRC’s Military Modernization and Human Rights Abuses,” which calls for increasingly hawkish trade and investment-review policies; and
  3. “Invest in Technological Leadership and Build Collective Economic Resilience in Concert with Allies,” focused on strengthening the workforce, critical supply chains, and related capabilities.

The report urges Congress and the Administration to deploy a variety of tools to compete with China, including by building on the Biden Administration’s recent executive orders on artificial intelligence and outbound investment.  With respect to trade, the Select Committee recommends implementing stricter export controls and moving China to a new tariff column, effectively revoking its permanent normal trade relations (PNTR) status.  Furthermore, the report calls for broadly expanding authorities for the Committee on Foreign Investment in the United States (CFIUS), as well as for investments in international economic development to counter China’s efforts to influence the economic affairs of trading partners through its Belt and Road Initiative.  The report recommends several steps to protect U.S. innovators from intellectual-property-related abuses and to sanction companies in China that threaten U.S. national security.

Recently, a bipartisan group of U.S. senators introduced new legislation to address transparency and accountability for artificial intelligence (AI) systems, including those deployed for certain “critical impact” use cases. While many other targeted, bipartisan AI bills have been introduced in both chambers of Congress, this bill appears to be one of the first to propose

The field of artificial intelligence (“AI”) is at a tipping point. Governments and industries are under increasing pressure to forecast and guide the evolution of a technology that promises to transform our economies and societies. In this series, our lawyers and advisors provide an overview of the policy approaches and regulatory frameworks for AI in jurisdictions around the world. Given the rapid pace of technological and policy developments in this area, the articles in this series should be viewed as snapshots in time, reflecting the current policy environment and priorities in each jurisdiction.

The following article examines the state of play in AI policy and regulation in the United States. The previous article in this series covered the European Union.

Future of AI Policy in the U.S.

U.S. policymakers are focused on artificial intelligence (AI) platforms as they explode into the mainstream.  AI has emerged as an active policy space across Congress and the Biden Administration, as officials scramble to educate themselves on the technology while crafting legislation, rules, and other measures to balance U.S. innovation leadership with national security priorities.

Over the past year, AI issues have drawn bipartisan interest and support.  House and Senate committees have held nearly three dozen hearings on AI this year alone, and more than 30 AI-focused bills have been introduced so far this Congress.  Two bipartisan groups of Senators have announced separate frameworks for comprehensive AI legislation.  Several AI bills—largely focused on the federal government’s internal use of AI—have also been voted on and passed through committees. 

Meanwhile, the Biden Administration has announced plans to issue a comprehensive executive order this fall to address a range of AI risks under existing law.  The Administration has also taken steps to promote the responsible development and deployment of AI systems, including securing voluntary commitments regarding AI safety and transparency from 15 technology companies. 

Despite strong bipartisan interest in AI regulation, commitment from leaders of major technology companies engaged in AI R&D, and broad support from the general public, passing comprehensive AI legislation remains a challenge.  No consensus has emerged around either substance or process, with different groups of Members, particularly in the Senate, developing their own versions of AI legislation through different procedures.  In the House, a bipartisan bill would punt the issue of comprehensive regulation to the executive branch, creating a blue-ribbon commission to study the issue and make recommendations.

I. Major Policy & Regulatory Initiatives

Three versions of a comprehensive AI regulatory regime have emerged in Congress – two in the Senate and one in the House.  We preview these proposals below.

A. SAFE Innovation: Values-Based Framework and New Legislative Process

In June, Senate Majority Leader Chuck Schumer (D-NY) unveiled a new bipartisan proposal—with Senators Martin Heinrich (D-NM), Todd Young (R-IN), and Mike Rounds (R-SD)—to develop legislation to promote and regulate artificial intelligence.  Leader Schumer proposed a plan to boost U.S. global competitiveness in AI development, while ensuring appropriate protections for consumers and workers.

Unless Congress reaches an agreement to keep the lights on, the U.S. government appears headed for a shutdown at midnight on October 1.  As the deadline looms, stakeholders should not let the legislative jockeying overshadow another consequence of a funding lapse: regulatory delay.  Under normal circumstances, federal agencies publish thousands of rules per year, covering agriculture, health care, transportation, financial services, and a host of other issues.  In a shutdown, however, most agency proceedings to develop and issue these regulations would grind to a halt, and a prolonged funding gap would lead to uncertainty for stakeholders, particularly as the 2024 elections approach.  Another consequence is that more regulations could become vulnerable to congressional disapproval under the Congressional Review Act (CRA).

The Administrative Procedure Act (APA) provides that “General notice of proposed rulemaking shall be published in the Federal Register,” and prescribes requirements for the contents of an agency rulemaking notice.  To initiate or finalize a rulemaking, agencies must submit rules to the Office of the Federal Register (OFR)—the National Archives and Records Administration agency that publishes the Federal Register, the government’s daily journal of rules, regulations, and other activities.

When government funding lapses, however, publication of critical rulemaking documents slows.  Under the Antideficiency Act, both an agency seeking to publish documents and OFR are prohibited from spending or obligating funds during a shutdown.  Government agencies are also prohibited from accepting voluntary services for government work—except in cases of “emergencies involving the safety of human life or the protection of property.”

These restrictions significantly curtail the ability of agencies to initiate new rulemakings that don’t qualify for the exception, or to advance rulemakings that began before a shutdown.  Federal employees are not allowed to draft or submit rules for publication, accept or review public comments, or revise or publish final rules while their agency is unfunded. 

Likewise, because the APA and other statutes require certain executive actions to be published in the Federal Register, OFR must also be funded and operational to publish most agency documents, even if the agency publishing the document has not experienced a funding lapse.

To account for this issue, OFR has set forth guidelines in advance of prior shutdowns that detail when agencies—funded or not—may submit documents for publication in the Federal Register, and when OFR will publish those documents.

First, for unfunded agencies—which, as of today, would include all federal agencies whose budgets are subject to annual appropriations—OFR will only publish documents that are necessary to safeguard human life, protect property, or “provide other emergency services consistent with the performance of functions and services exempted under the Antideficiency Act.”  Publishing agency materials related to “ongoing, regular functions of government” that pose no “imminent threat” to human safety or property is not a permissible activity for unfunded agencies, and such documents are therefore inappropriate for OFR publication. 

This restriction would have significant implications for agency regulatory efforts across the government, with impact on a wide range of regulated sectors.  In one notable example, the rulemaking to implement the President’s recent outbound investment executive order (which we and our colleagues have discussed here) would likely stall for the duration of the shutdown.  Nothing in the order or the Treasury Department’s advance notice of proposed rulemaking (ANPRM) suggests that the order—despite being issued under the International Emergency Economic Powers Act (IEEPA)—involves imminent threats to human safety or property.  Thus, while public comments on the ANPRM are due on September 28, before the end of the fiscal year, a subsequent lapse in the Treasury Department’s funding would prohibit agency staff from reviewing public comments, scheduling and taking meetings with stakeholders, and revising and finalizing the proposed rule.

Earlier this month the Biden Administration released its long-anticipated Executive Order on Addressing United States Investments in Certain National Security Technologies and Products in Countries of Concern (“EO”), which imposes (1) prohibitions on certain outbound investments in the semiconductors and microelectronics, quantum information technologies, and artificial intelligence sectors, and (2) mandatory notification requirements for a

Today, Senate Majority Leader Chuck Schumer (D-NY) unveiled a new bipartisan proposal to develop legislation to promote and regulate artificial intelligence. In a speech at the Center for Strategic & International Studies, Leader Schumer remarked: “[W]ith AI, we cannot be ostriches sticking our heads in the sand. The question is: what role [do] Congress

Congressional scrutiny of the U.S. relationship with China marched forward this week as Representatives Rosa DeLauro (D-CT), Bill Pascrell (D-NJ), and Brian Fitzpatrick (R-PA) reintroduced a new and expanded version of the National Critical Capabilities Defense Act (NCCDA)—legislation to create a national security review process for “outbound” transactions by U.S. companies investing overseas.

The bill

On April 25, 2023, four federal agencies — the Department of Justice (“DOJ”), Federal Trade Commission (“FTC”), Consumer Financial Protection Bureau (“CFPB”), and Equal Employment Opportunity Commission (“EEOC”) — released a joint statement on the agencies’ efforts to address discrimination and bias in automated systems. 

The statement applies to “automated systems,” which are broadly defined “to mean software and algorithmic processes” beyond AI.  Although the statement notes the significant benefits that can flow from the use of automated systems, it also cautions against unlawful discrimination that may result from that use. 

The statement starts by summarizing the existing legal authorities that apply to automated systems and each agency’s guidance and statements related to AI.  Helpfully, the statement serves to aggregate links to key AI-related guidance documents from each agency, providing a one-stop shop for important AI-related publications from all four entities.  For example, the statement summarizes the EEOC’s remit in enforcing federal laws that make it unlawful to discriminate against an applicant or employee and the EEOC’s enforcement activities related to AI, and includes a link to a technical assistance document.  Similarly, the statement outlines the FTC’s reports and guidance on AI, and includes multiple links to FTC AI-related documents.

After providing an overview of each agency’s position and links to key documents, the statement then summarizes the following sources of potential discrimination and bias, which could indicate the regulatory and enforcement priorities of these agencies.

  • Data and Datasets:  The statement notes that outcomes generated by automated systems can be skewed by unrepresentative or imbalanced data sets.  The statement says that flawed data sets, along with correlation between data and protected classes, can lead to discriminatory outcomes.
  • Model Opacity and Access:  The statement observes that some automated systems are “black boxes,” meaning that the internal workings of automated systems are not always transparent to people, and thus difficult to oversee.
  • Design and Use:  The statement also notes that flawed assumptions about users may play a role in unfair or biased outcomes.

We will continue to monitor these and related developments across our blogs.

Today the National Telecommunications and Information Administration (NTIA) released its first notice of funding opportunity for development of next-generation wireless infrastructure under the new Public Wireless Supply Chain Innovation Fund (“Innovation Fund”).  According to NTIA’s announcement, this first tranche of funding will include up to $140.5 million in grants, ranging from $250,000 to $50 million