Matthew Shapanka

Matthew Shapanka practices at the intersection of law, policy, and politics. He advises clients before Congress, state legislatures, and government agencies, helping businesses navigate complex legislative, regulatory, and investigations matters; mitigate their legal, political, and reputational risks; and capture business opportunities.

Drawing on more than 15 years of experience on Capitol Hill and in private practice, state government, and political campaigns, Matt develops and executes complex, multifaceted public policy initiatives for clients seeking actions by Congress, state legislatures, and federal and state government agencies. He regularly counsels and represents businesses in legislative and regulatory matters involving intellectual property, national security, regulation of critical and emerging technologies like artificial intelligence, connected and autonomous vehicles, and other tech policy issues. He also represents clients facing congressional investigations or inquiries across a range of committees and subject matters.

Matt rejoined Covington after serving as Chief Counsel for the U.S. Senate Committee on Rules and Administration, where he advised Chairwoman Amy Klobuchar (D-MN) on all legal, policy, and oversight matters before the Committee, particularly federal election and campaign finance law, Federal Election Commission nominations, and oversight of the legislative branch. Most significantly, Matt led the Committee’s staff work on the Electoral Count Reform Act (a landmark bipartisan law that updates the procedures for certifying and counting votes in presidential elections) and the Committee’s bipartisan joint investigation (with the Homeland Security Committee) into the security planning and response to the January 6th attack.

Both in Congress and at Covington, Matt has prepared dozens of corporate and nonprofit executives, academics, government officials, and presidential nominees for testimony at congressional committee hearings and depositions. He is a skilled legislative drafter who has composed dozens of bills and amendments introduced in Congress and state legislatures, including several that have been enacted into law across multiple policy areas. Matt also leads the firm’s state policy practice, advising clients on complex multistate legislative and regulatory matters and managing state-level advocacy efforts.

In addition to his policy work, Matt advises and represents clients on the full range of political law compliance and enforcement matters involving federal election, campaign finance, lobbying, and government ethics laws, the Securities and Exchange Commission’s “Pay-to-Play” rule, and the election and political laws of states and municipalities across the country.

Before law school, Matt served in the administration of former Governor Deval Patrick (D-MA) as a research analyst in the Massachusetts Recovery & Reinvestment Office, where he worked on policy, communications, and compliance matters for federal economic recovery funding awarded to the state. He has also staffed federal, state, and local political candidates in Massachusetts and New Hampshire.

With Congress in summer recess and state legislative sessions waning, the Biden Administration continues to implement its October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (“EO”).  On July 26, the White House announced a series of federal agency actions under the EO.

Continue Reading Federal Agencies Continue Implementation of AI Executive Order

This update focuses on how growing quantum sector investment in the UK and US is leading to the development and commercialization of quantum computing technologies with the potential to revolutionize and disrupt key sectors.  This is a fast-growing area that is seeing significant levels of public and private investment activity.  We take a look at how approaches differ in the UK and US, and discuss how a concerted, international effort is needed both to realize the full potential of quantum technologies and to mitigate new risks that may arise as the technology matures.

Quantum Computing

Quantum computing uses quantum mechanics principles to solve certain complex mathematical problems faster than classical computers.  Whilst classical computers use binary “bits” to perform calculations, quantum computers use quantum bits (“qubits”).  The value of a bit can only be zero or one, whereas a qubit can exist as zero, one, or a combination of both states (a phenomenon known as superposition), allowing quantum computers to solve certain problems exponentially faster than classical computers.
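The bit-versus-qubit distinction above can be sketched in a few lines of Python. This is purely an illustration of the mathematics (a single qubit's state vector, modeled with NumPy), not an interface to any real quantum hardware:

```python
import numpy as np

# A classical bit holds exactly 0 or 1.  A qubit's state is a unit
# vector of two complex amplitudes over the basis states |0> and |1>.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Equal superposition: the qubit is a combination of both basis
# states until it is measured.
plus = (ket0 + ket1) / np.sqrt(2)

# Measurement outcome probabilities are the squared magnitudes of
# the amplitudes -- here, a 50/50 chance of reading 0 or 1.
probs = np.abs(plus) ** 2
```

Measuring the qubit collapses it to a single classical outcome; it is the ability to manipulate many such amplitudes at once, across many entangled qubits, that gives quantum algorithms their potential speedups.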

The applications of quantum technologies are wide-ranging, and quantum computing has the potential to revolutionize many sectors, including life sciences, climate and weather modelling, financial portfolio management, and artificial intelligence (“AI”).  However, advances in quantum computing also pose risks, the most significant being to data protection.  Hackers could exploit the ability of quantum computers to solve complex mathematical problems at high speed to break currently used cryptography methods and access personal and sensitive data.

This is a rapidly developing area that governments are only just turning their attention to.  Governments are focusing not just on “quantum-readiness” and countering the emerging threats that quantum computing will present in the hands of bad actors (the US, for instance, is planning the migration of sensitive data to post-quantum encryption), but also on ramping up investment and growth in quantum technologies. Continue Reading Quantum Computing: Developments in the UK and US

With most state legislative sessions across the country adjourned or winding down without enacting significant artificial intelligence legislation, Colorado and California continue their steady drive to adopt comprehensive legislation regulating the development and deployment of AI systems. 

Colorado

Although Colorado’s AI law (SB 205), which Governor Jared Polis (D) signed into law in May, does not take effect until February 1, 2026, lawmakers have already begun a process for refining the nation’s first comprehensive AI law.  As we described here, the new law will require developers and deployers of “high-risk” AI systems to comply with certain requirements in order to mitigate risks of algorithmic discrimination. 

On June 13, Governor Polis, Attorney General Phil Weiser (D), and Senate Majority Leader Robert Rodriguez (D) issued a public letter announcing a “process to revise” the new law before it even takes effect, and “minimize unintended consequences associated with its implementation.”  The revision process will address concerns that the high cost of compliance will adversely affect “home grown businesses” in Colorado, including through “barriers to growth and product development, job losses, and a diminished capacity to raise capital.”

The letter proposes “a handful of specific areas” for revision, including:

  • Refining SB 205’s definition of AI systems to focus on “the most high-risk systems” in order to align with federal measures and frameworks in states with substantial technology sectors.  This goal aligns with the officials’ call for “harmony across any regulatory framework adopted by states” to “limit the burden associated with a multi-state compliance scheme that deters investment and hamstrings small technology firms.”  The officials add that they “remain open to delays in the implementation” of the new law “to ensure such harmonization.”  
  • Narrowing SB 205’s requirements to focus on developers of high-risk systems and avoid regulating “small companies that may deploy AI within third-party software that they use in the ordinary course of business.”  This goal addresses concerns of Colorado businesses that the new law could “inadvertently impose prohibitively high costs” on AI deployers.
  • Shifting from a “proactive disclosure regime” to a “traditional enforcement regime managed by the Attorney General investigating matters after the fact.”  This goal also focuses on protecting Colorado’s small businesses from prohibitively high costs that could deter investment and hamper Colorado’s technology sector.

Continue Reading Colorado and California Continue to Refine AI Legislation as Legislative Sessions Wane

Nearly a year after Senate Majority Leader Chuck Schumer (D-NY) launched the SAFE Innovation Framework for artificial intelligence (AI) with Senators Mike Rounds (R-SD), Martin Heinrich (D-NM), and Todd Young (R-IN), the bipartisan group has released a 31-page “Roadmap” for AI policy.  The overarching theme of the Roadmap is “harnessing the full potential of AI while minimizing the risks of AI in the near and long term.”

In contrast to Europe’s approach to regulating AI, the Roadmap does not propose or even contemplate a comprehensive AI law.  Rather, it identifies key themes and areas of agreement and directs the relevant congressional committees of jurisdiction to legislate on key issues.  The Roadmap recommendations are informed by the nine AI Insight Forums that the bipartisan group convened over the last year.

  • Supporting U.S. Innovation in AI.  The Roadmap recommends at least $32 billion in funding per year for non-defense AI innovation, and the authors call on the Appropriations Committee to “develop emergency appropriations language to fill the gap between current spending levels and the [National Security Commission on AI (NSCAI)]-recommended level,” suggesting the bipartisan group would like to see Congress increase funding for AI as soon as this year. The funding would cover a host of purposes, such as AI R&D, including AI chip design and manufacture; funding the outstanding CHIPS and Science Act accounts that relate to AI; and AI testing and evaluation at NIST.
    • This pillar also endorses the bipartisan Creating Resources for Every American to Experiment with Artificial Intelligence (CREATE AI) Act (S. 2714), which would broaden nonprofit and academic researchers’ access to AI development resources including computing power, datasets, testbeds, and training through a new National Artificial Intelligence Research Resource.  The Roadmap also supports elements of the Future of AI Innovation Act (S. 4178) related to “grand challenge” funding programs, which aim to accelerate AI development through prize competitions and federal investment initiatives.
    • The bipartisan group recommends including funds for the Department of Defense and DARPA to address national security threats and opportunities in the emergency funding measure.  
  • AI and the Workforce.  The Roadmap recommends committees of jurisdiction consider the impact of AI on U.S. workers and ensure that working Americans benefit from technological progress, including through training programs and by studying the impacts of AI on workers.  Importantly, the bipartisan group recommends legislation to “improve the U.S. immigration system for high-skilled STEM workers.”  The Roadmap does not address benefit programs for displaced workers.

Continue Reading Bipartisan Senate AI Roadmap Released

In the absence of congressional action on comprehensive artificial intelligence (AI) legislation, state legislatures are forging ahead with groundbreaking bills to regulate the rapidly advancing technology.  On May 8, the Colorado House of Representatives passed SB 205, a far-reaching and comprehensive AI bill, on a 41-22-2 vote.

Continue Reading Colorado Becomes the First State to Pass Comprehensive AI Legislation

As the 2024 elections approach and the window for Congress to consider bipartisan comprehensive artificial intelligence (AI) legislation shrinks, California officials are attempting to guard against a generative AI free-for-all—at least with respect to state government use of the rapidly advancing technology—by becoming the largest state to issue rules for …

Continue Reading California establishes working guidance for AI procurement

Senate Commerce Committee Chair Maria Cantwell (D-WA) and Senators Todd Young (R-IN), John Hickenlooper (D-CO), and Marsha Blackburn (R-TN) recently introduced the Future of AI Innovation Act, a legislative package that addresses key bipartisan priorities to promote AI safety, standardization, and access.  The bill would also advance U.S. leadership …

Continue Reading New Bipartisan Senate Legislation Aims to Bolster U.S. AI Research and Deployment

With the 2024 election rapidly approaching, the Biden Administration must race to finalize proposed agency actions as early as mid-May to avoid facing possible nullification if the Republican Party controls both chambers of Congress and the White House next year. 

The Congressional Review Act (CRA) allows Congress to overturn rules issued by the Executive Branch by enacting a joint resolution of disapproval that cancels the rule and prohibits the agency from issuing a rule that is “substantially the same.”  One of the CRA’s most distinctive features, a 60-day “lookback period,” gives the next Congress 60 days to review rules issued near the end of the last Congress.  This means that the Administration must finalize and publish certain rules long before Election Day to avoid being eligible for CRA review in the new year.

Overview of the CRA

The CRA requires federal agencies to submit all final rules to Congress before the rule may take effect.  It provides the House with 60 legislative days and the Senate with 60 session days to introduce a joint resolution of disapproval to overturn the rule.  This 60-day period counts every calendar day, including weekends and holidays, but excludes days that either chamber is out of session for more than three days pursuant to an adjournment resolution.  In the Senate, a joint resolution of disapproval receives only limited debate and may not be filibustered.  Moreover, if it has been more than 20 calendar days since Congress received a final rule and a joint resolution has not been reported out of the appropriate committee, a group of 30 Senators can file a petition to force a floor vote on the resolution.

If a CRA resolution receives a simple majority in both chambers and is signed by the President, or if Congress overrides a presidential veto, the rule cannot go into effect and is treated “as though such rule had never taken effect.”[1]  The agency is also barred from reissuing a rule that is “substantially the same,” unless authorized by future law.[2]    

Election Year Threat: CRA Lookback Period

These procedures pose special challenges for federal agencies in an election year.  If a rule is submitted to Congress within 60 days before adjournment, the CRA’s lookback provision restarts the 60-day timeline for introducing a CRA resolution in the next session of Congress.

This procedure ultimately requires the current administration to assess the threat of a CRA resolution against certain rules and decide whether to finalize each rule before the deadline or risk a potential CRA challenge. Continue Reading Congressional Review Act Threat Looms Over Biden Administration Rulemakings

On April 2, the California Senate Judiciary Committee held a hearing on the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047) and favorably reported the bill in a 9-0 vote (with 2 members not voting).  The vote marks a major step toward comprehensive artificial intelligence …

Continue Reading California Senate Committee Advances Comprehensive AI Bill