
Micaela McMurrough

Micaela McMurrough serves as co-chair of Covington's global and multi-disciplinary Technology Group and as co-chair of the firm's Artificial Intelligence and Internet of Things (IoT) initiative. In her practice, she has represented clients in high-stakes antitrust, patent, trade secrets, contract, securities, and other complex commercial litigation, and she regularly represents and advises domestic and international clients on cybersecurity and data privacy issues, including cybersecurity investigations and cyber incident response. Micaela has advised clients on data breaches and other network intrusions, conducted cybersecurity investigations, and counseled clients on evolving cybersecurity regulations and cybersecurity norms in the context of international law.

In 2016, Micaela was selected as one of thirteen Madison Policy Forum Military-Business Cybersecurity Fellows. She regularly engages with government, military, and business leaders in the cybersecurity industry in an effort to develop national strategies for complex cyber issues and policy challenges. Micaela previously served as a United States Presidential Leadership Scholar, principally responsible for launching a program to familiarize federal judges with various aspects of the U.S. national security structure and national intelligence community.

Prior to her legal career, Micaela served in the Military Intelligence Branch of the United States Army. She served as Intelligence Officer of a 1,200-member maneuver unit conducting combat operations in Afghanistan and was awarded the Bronze Star.

On October 16, 2024, the U.S. Cybersecurity and Infrastructure Security Agency (“CISA”) and the Federal Bureau of Investigation (“FBI”) published guidance on Product Security Bad Practices (the “Guidance”) that identifies “exceptionally risky” product security practices for software manufacturers.  The Guidance states that the ten identified practices—categorized as (1) Product Properties, (2) Security Features, or (3) Organizational Processes and Policies—are “dangerous and significantly elevate[] risk to national security, national economic security, and national public health and safety.”

The Guidance offers recommendations to remediate each of the identified practices and states that adoption of the recommendations indicates software manufacturers “are taking ownership of customer security outcomes.”  Provided below are the ten practices and associated recommendations.

I. Product Properties

  • Development Not in Memory Safe Languages – The Guidance recommends software manufacturers protect against “memory safety vulnerabilities,” such as through the use of a memory safe language or protective hardware.
  • Inclusion of User-Provided Input in SQL Query Strings – The Guidance encourages product designs “that systematically prevent the introduction of SQL injection vulnerabilities, such as by consistently enforcing the use of parametrized queries.”
  • Inclusion of User-Provided Input in Operating System Command Strings – The Guidance recommends product designs “that systematically prevent[] command injection vulnerabilities, such as by consistently ensuring that command inputs are clearly delineated from the contents of a command itself.”
  • Presence of Default Passwords – The Guidance suggests the use of (among others) “instance-unique initial passwords,” requiring users to create new passwords during installation, and “time-limited setup passwords.”
  • Presence of Known Exploited Vulnerabilities – The Guidance states that known exploited vulnerabilities (“KEV”) should be patched before a product is deployed.  The Guidance also recommends that software manufacturers offer customers a free and timely patch when CISA’s catalog introduces a new KEV and advise customers “of the associated risks of not installing the patch.”
  • Presence of Open Source Software with Known Exploitable Vulnerabilities – The Guidance encourages software manufacturers to make “a reasonable effort to evaluate and secure their open source software dependencies.”  In particular, the Guidance recommends that manufacturers conduct security scans on the initial and subsequent versions of open source software incorporated into the product and “[r]outinely monitor for Common Vulnerabilities and Exposures (CVEs) or other security-relevant alerts . . . in all open source software dependencies and update them as necessary,” among other steps.  The Guidance further encourages software manufacturers to offer customers “a software bill of materials.”
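Several of these recommendations correspond to well-established coding patterns. The following Python sketch is our own illustration (not code from the Guidance) of three of them: binding user input as a SQL query parameter, passing operating system command arguments as a delineated list, and generating an instance-unique initial password rather than shipping a shared default.

```python
import secrets
import sqlite3
import subprocess

def find_user(conn, username):
    # Parameterized query: the "?" placeholder binds the input as data,
    # so it can never be interpreted as SQL (cf. the SQL injection bullet).
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

def list_directory(path):
    # Arguments passed as a list are clearly delineated from the command
    # itself and are never parsed by a shell (cf. the command injection
    # bullet); "--" marks the end of options.
    return subprocess.run(["ls", "--", path], capture_output=True, text=True)

def initial_setup_password():
    # An instance-unique initial password generated at install time,
    # instead of a default shared across all deployments.
    return secrets.token_urlsafe(16)
```

In the SQL example, a classic payload such as `' OR '1'='1` is bound as ordinary data and simply matches no row, which is the kind of systematic prevention the Guidance describes.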

Continue Reading CISA and FBI Publish Product Security Bad Practices

On September 17, 2024, the U.S. Cybersecurity and Infrastructure Security Agency (“CISA”) and the Federal Bureau of Investigation (“FBI”) published a Secure by Design Alert, cautioning senior executives and business leaders to be aware of and work to eliminate cross-site scripting (“XSS”) vulnerabilities in their products (the “Alert”).  XSS

Continue Reading CISA and FBI Publish a Secure by Design Alert to Eliminate Cross-Site Scripting Vulnerabilities
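As background on the vulnerability class the Alert addresses: XSS arises when user-supplied input is written into a web page without output encoding, allowing injected markup or script to execute in a victim's browser. A minimal Python illustration of the standard mitigation (escaping on output, here with the standard library's html module) might look like:

```python
import html

def render_comment(user_input: str) -> str:
    # html.escape converts characters such as <, >, and & into HTML
    # entities, so injected markup is displayed as text rather than
    # executed by the browser.
    return "<p>" + html.escape(user_input) + "</p>"
```

Real applications should rely on their template engine's context-aware auto-escaping rather than hand-rolled escaping, but the principle is the same.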

On March 27, 2024, the U.S. Cybersecurity and Infrastructure Security Agency’s (“CISA”) Notice of Proposed Rulemaking (“Proposed Rule”) related to the Cyber Incident Reporting for Critical Infrastructure Act of 2022 (“CIRCIA”) was released on the Federal Register website.  The Proposed Rule, which will be formally published in the Federal Register on April 4, 2024, proposes draft regulations to implement the incident reporting requirements for critical infrastructure entities from CIRCIA, which President Biden signed into law in March 2022.  CIRCIA established two cyber incident reporting requirements for covered critical infrastructure entities: a 24-hour requirement to report ransomware payments and a 72-hour requirement to report covered cyber incidents to CISA.  While the overarching requirements and structure of the reporting process were established under the law, CIRCIA also directed CISA to issue the Proposed Rule within 24 months of the law’s enactment to provide further detail on the scope and implementation of these requirements.  Under CIRCIA, the final rule must be published by September 2025.

The Proposed Rule addresses various elements of CIRCIA, which will be covered in a forthcoming Client Alert.  This blog post focuses primarily on the proposed definitions of two pivotal terms that were left to further rulemaking under CIRCIA (Covered Entity and Covered Cyber Incident), which illustrate the broad scope of CIRCIA’s reporting requirements, as well as certain proposed exceptions to the reporting requirements.  The Proposed Rule will be subject to a review and comment period for 60 days after publication in the Federal Register. 

Covered Entities

CIRCIA broadly defined “Covered Entity” to include entities that are in one of the 16 critical infrastructure sectors established under Presidential Policy Directive 21 (“PPD-21”) and directed CISA to develop a more comprehensive definition in subsequent rulemaking.  Accordingly, the Proposed Rule (1) addresses how to determine whether an entity is “in” one of the 16 sectors and (2) proposes two additional criteria for the Covered Entity definition, either of which must be met in order for an entity to be covered.  Notably, the Proposed Rule’s definition of Covered Entity would encompass the entire corporate entity, even if only a constituent part of its business or operations meets the criteria.  Thus, Covered Cyber Incidents experienced by a Covered Entity would be reportable regardless of which part of the organization suffered the impact.  In total, CISA estimates that over 300,000 entities would be covered by the Proposed Rule.

[Decision tree demonstrating the overarching elements of the Covered Entity definition; for illustrative purposes only.]

Continue Reading CISA Issues Notice of Proposed Rulemaking for Critical Infrastructure Cybersecurity Incident Reporting

Technology companies are grappling with unprecedented changes that promise to accelerate exponentially in the challenging period ahead. We invite you to join Covington experts and invited presenters from around the world to explore the key issues faced by businesses developing or deploying cutting-edge technologies. These highly concentrated sessions are packed

Continue Reading Covington’s Fifth Annual Technology Forum – Looking Ahead: New Legal Frontiers for the Tech Industry

Earlier this month, the New York Department of Financial Services (“NYDFS”) announced that it had finalized the Second Amendment to its “first-in-the-nation” cybersecurity regulation, 23 NYCRR Part 500.  The Amendment implements many of the changes that NYDFS originally proposed in the two prior versions of the Second Amendment released for public comment in November 2022 and June 2023.  The first version proposed increased cybersecurity governance and board oversight requirements, the expansion of the types of policies and controls companies would be required to implement, the creation of a new class of companies subject to additional requirements, expanded incident reporting requirements, and the introduction of enumerated factors to be considered in enforcement decisions, among other changes.  The revisions in the second version reflect adjustments rather than substantial changes from the first version.  Compliance periods for the newly finalized requirements in the Second Amendment will be phased in over the next two years, as set forth in additional detail below.

The finalized Second Amendment largely adheres to the revisions from the second version of the Proposed Second Amendment but includes a few substantive changes, including those described below:

  • The finalized Amendment removes the previously-proposed requirement that each class A company conduct independent audits of its cybersecurity program “at least annually.”  The finalized Amendment still requires each class A company to conduct such audits, but at a frequency based on the company’s risk assessments.  NYDFS stated that it made this change in response to comments that an annual audit requirement would be overly burdensome and with the understanding that class A companies typically conduct more than one audit annually.  See Section 500.2(c).
  • The finalized Amendment updates the oversight requirements for the senior governing body of a covered entity with respect to the covered entity’s cybersecurity risk management.  Updates include, among others, a requirement to confirm that the covered entity’s management has allocated sufficient resources to implement and maintain a cybersecurity program.  This requirement was part of the proposed definition of “Chief Information Security Officer.”  NYDFS stated that it moved this requirement to the senior governing bodies in response to comments that CISOs do not typically make enterprise-wide resource allocation decisions, which are instead the responsibility of senior management.  See Section 500.4 (d).
  • The finalized Amendment removes a proposed additional requirement to report certain privileged account compromises to NYDFS.  NYDFS stated that it did so in response to public comments that this proposed requirement “is overbroad and would lead to overreporting.”  However, the finalized Amendment retains previously-proposed changes that will require covered entities to report certain ransomware deployments or extortion payments to NYDFS.  See Section 500.17 (a).

Continue Reading New York Department of Financial Services Finalizes Second Amendment to Cybersecurity Regulation

The field of artificial intelligence (“AI”) is at a tipping point. Governments and industries are under increasing pressure to forecast and guide the evolution of a technology that promises to transform our economies and societies. In this series, our lawyers and advisors provide an overview of the policy approaches and regulatory frameworks for AI in jurisdictions around the world. Given the rapid pace of technological and policy developments in this area, the articles in this series should be viewed as snapshots in time, reflecting the current policy environment and priorities in each jurisdiction.

The following article examines the state of play in AI policy and regulation in the United States. The previous article in this series covered the European Union.

Future of AI Policy in the U.S.

U.S. policymakers are focused on artificial intelligence (AI) platforms as they explode into the mainstream.  AI has emerged as an active policy space across Congress and the Biden Administration, as officials scramble to educate themselves on the technology while crafting legislation, rules, and other measures to balance U.S. innovation leadership with national security priorities.

Over the past year, AI issues have drawn bipartisan interest and support.  House and Senate committees have held nearly three dozen hearings on AI this year alone, and more than 30 AI-focused bills have been introduced so far this Congress.  Two bipartisan groups of Senators have announced separate frameworks for comprehensive AI legislation.  Several AI bills—largely focused on the federal government’s internal use of AI—have also been voted on and passed through committees. 

Meanwhile, the Biden Administration has announced plans to issue a comprehensive executive order this fall to address a range of AI risks under existing law.  The Administration has also taken steps to promote the responsible development and deployment of AI systems, including securing voluntary commitments regarding AI safety and transparency from 15 technology companies. 

Despite strong bipartisan interest in AI regulation, commitment from leaders of major technology companies engaged in AI R&D, and broad support from the general public, passing comprehensive AI legislation remains a challenge.  No consensus has emerged around either substance or process, with different groups of Members, particularly in the Senate, developing their own versions of AI legislation through different procedures.  In the House, a bipartisan bill would punt the issue of comprehensive regulation to the executive branch, creating a blue-ribbon commission to study the issue and make recommendations.

I. Major Policy & Regulatory Initiatives

Three versions of a comprehensive AI regulatory regime have emerged in Congress – two in the Senate and one in the House.  We preview these proposals below.

A. SAFE Innovation: Values-Based Framework and New Legislative Process

In June, Senate Majority Leader Chuck Schumer (D-NY) unveiled a new bipartisan proposal—with Senators Martin Heinrich (D-NM), Todd Young (R-IN), and Mike Rounds (R-SD)—to develop legislation to promote and regulate artificial intelligence.  Leader Schumer proposed a plan to boost U.S. global competitiveness in AI development, while ensuring appropriate protections for consumers and workers.

Continue Reading Spotlight Series on Global AI Policy — Part II: U.S. Legislative and Regulatory Developments

Earlier this week, the Securities and Exchange Commission (“SEC”) published an update to its rulemaking agenda indicating that it does not plan to approve two proposed cyber rules until at least October 2023 (the agenda’s timeframe is an estimate).  The proposed rules in question address disclosure requirements regarding cybersecurity governance

Continue Reading SEC Delays Cybersecurity Rules

On April 25, 2023, four federal agencies — the Department of Justice (“DOJ”), Federal Trade Commission (“FTC”), Consumer Financial Protection Bureau (“CFPB”), and Equal Employment Opportunity Commission (“EEOC”) — released a joint statement on the agencies’ efforts to address discrimination and bias in automated systems. 

The statement applies to “automated systems,” which are broadly defined “to mean software and algorithmic processes” beyond AI.  Although the statement notes the significant benefits that can flow from the use of automated systems, it also cautions against unlawful discrimination that may result from that use. 

The statement starts by summarizing the existing legal authorities that apply to automated systems and each agency’s guidance and statements related to AI.  Helpfully, the statement aggregates links to key AI-related guidance documents from each agency, providing a one-stop shop for all four entities’ important AI-related publications.  For example, the statement summarizes the EEOC’s remit in enforcing federal laws that prohibit discrimination against applicants and employees, describes the EEOC’s enforcement activities related to AI, and links to a technical assistance document.  Similarly, the statement outlines the FTC’s reports and guidance on AI and links to multiple FTC AI-related documents.

After providing an overview of each agency’s position and links to key documents, the statement then summarizes the following sources of potential discrimination and bias, which could indicate the regulatory and enforcement priorities of these agencies.

  • Data and Datasets:  The statement notes that outcomes generated by automated systems can be skewed by unrepresentative or imbalanced data sets.  The statement says that flawed data sets, along with correlation between data and protected classes, can lead to discriminatory outcomes.
  • Model Opacity and Access:  The statement observes that some automated systems are “black boxes,” meaning that the internal workings of automated systems are not always transparent to people, and thus difficult to oversee.
  • Design and Use:  The statement also notes that flawed assumptions about users may play a role in unfair or biased outcomes.

We will continue to monitor these and related developments across our blogs.

Continue Reading DOJ, FTC, CFPB, and EEOC Statement on Discrimination and AI

Many employers and employment agencies have turned to artificial intelligence (“AI”) tools to assist them in making better and faster employment decisions, including in the hiring and promotion processes.  The use of AI for these purposes has been scrutinized and will now be regulated in New York City.  The New

Continue Reading Artificial Intelligence & NYC Employers: New York City Seeks Publication of Proposed Rules That Would Regulate the Use of AI Tools in the Employment Context

This past week, co-defendants in a class action related to the theft of cryptocurrency engaged in their own lawsuit over alleged security failures.  IRA Financial Trust, a retirement account provider offering crypto-assets, sued class action co-defendant Gemini Trust Company, LLC, a crypto-asset exchange owned by the Winklevoss twins, following a

Continue Reading Litigation Between FinTech Companies Follows Class Action Over Cryptocurrency Theft