
Micaela McMurrough

Micaela McMurrough serves as co-chair of Covington's global and multi-disciplinary Technology Group and as co-chair of the firm's Artificial Intelligence and Internet of Things (IoT) initiative. In her practice, she has represented clients in high-stakes antitrust, patent, trade secrets, contract, securities, and other complex commercial litigation matters, and she regularly represents and advises domestic and international clients on cybersecurity and data privacy issues, including cybersecurity investigations and cyber incident response. Micaela has advised clients on data breaches and other network intrusions, conducted cybersecurity investigations, and counseled clients on evolving cybersecurity regulations and cybersecurity norms under international law.

In 2016, Micaela was selected as one of thirteen Madison Policy Forum Military-Business Cybersecurity Fellows. She regularly engages with government, military, and business leaders in the cybersecurity industry in an effort to develop national strategies for complex cyber issues and policy challenges. Micaela previously served as a United States Presidential Leadership Scholar, principally responsible for launching a program to familiarize federal judges with various aspects of the U.S. national security structure and national intelligence community.

Prior to her legal career, Micaela served in the Military Intelligence Branch of the United States Army. She served as Intelligence Officer of a 1,200-member maneuver unit conducting combat operations in Afghanistan and was awarded the Bronze Star.

On December 16, 2025, the U.S. National Institute of Standards and Technology (“NIST”) published a preliminary draft of the Cybersecurity Framework Profile for Artificial Intelligence (“Cyber AI Profile” or “Profile”).  According to the draft, the Cyber AI Profile is intended to “provide guidelines for managing cybersecurity risk related to AI systems [and] identify[] opportunities for using AI to enhance cybersecurity capabilities.”  The draft Profile builds on the existing voluntary NIST Cybersecurity Framework (“CSF”) 2.0 — which “provides guidance to industry, government agencies, and other organizations to manage cybersecurity risks” — and overlays three AI Focus Areas (Secure, Detect, Thwart) on top of the CSF’s outcomes (Functions, Categories, and Subcategories) to suggest considerations for organizations to prioritize when securing AI implementations, using AI to enhance cybersecurity defenses, or defending against adversarial uses of AI.  This draft guidance will likely be familiar to organizations that already leverage the CSF 2.0 in their cybersecurity programs and may be complementary to frameworks that organizations already have in place.  Even so, the outcomes are designed to be flexible such that a range of organizations (with mature or novel programs) can leverage the guidance to help manage AI-related cybersecurity risk.  Continue Reading NIST Publishes Preliminary Draft of Cybersecurity Framework Profile for Artificial Intelligence for Public Comment

On December 19, New York Governor Kathy Hochul (D) signed the Responsible AI Safety & Education (“RAISE”) Act into law, making New York the second state in the nation to codify public safety disclosure and reporting requirements for developers of frontier AI models.  Prior to signing, Governor Hochul secured several

Continue Reading New York Governor Signs Frontier AI Safety Legislation

On October 21, 2025, the New York State Department of Financial Services (“NYDFS”) issued an industry letter (the “Guidance”) highlighting the cybersecurity risks related to Covered Entities’ use of Third-Party Service Providers (“TPSPs”) and providing strategies to address these risks. The Guidance is addressed to all Covered Entities subject to NYDFS’s cybersecurity regulation codified at 23 NYCRR Part 500 (“Cybersecurity Regulation”), which requires Covered Entities to implement a comprehensive cybersecurity program that includes written policies addressing TPSP risks as well as due diligence, contractual requirements, and periodic assessments for TPSPs. While the Guidance is explicit that it “does not impose any new requirements” beyond those already included in the Cybersecurity Regulation, it provides significant additional detail to clarify how to comply with existing requirements and offers industry best practices to mitigate TPSP-related cyber risks. As the Guidance suggests that NYDFS will continue to focus on TPSP-related cyber risks, Covered Entities should consider reviewing their TPSP oversight and management against the specific recommendations from the Guidance and adjusting their practices where appropriate. Alongside a review of TPSP oversight and management, Covered Entities may also consider reviewing their implementation of the provisions of the Cybersecurity Regulation requiring multifactor authentication, asset management, and data retention, which take effect on November 1, 2025.  Continue Reading NYDFS Publishes Industry Guidance on Managing Cyber Risks Related to Third-Party Service Providers

On September 29, California Governor Gavin Newsom (D) signed into law SB 53, the Transparency in Frontier Artificial Intelligence Act (“TFAIA”), establishing public safety regulations for developers of “frontier models,” or large foundation AI models trained using massive amounts of computing power.  TFAIA is the first frontier model safety

Continue Reading California Governor Signs Landmark AI Safety Legislation

The U.S. Cybersecurity and Infrastructure Security Agency (“CISA”) plans to delay the publication of its much-anticipated cybersecurity incident reporting rule implementing the Cyber Incident Reporting for Critical Infrastructure Act of 2022 (“CIRCIA”).  According to an entry on the Spring 2025 Unified Agenda of Regulatory and Deregulatory Actions, released on September

Continue Reading CISA Delays Cyber Incident Reporting Rule for Critical Infrastructure

On July 29, 2025, the National Institute of Standards and Technology (“NIST”) unveiled an outline for preliminary, stakeholder-driven standards, known as a “zero draft,” for AI testing, evaluation, verification, and validation (“TEVV”).  This outline is part of NIST’s AI Standards Zero Drafts pilot project, which was announced on March 25, 2025, as we previously reported.  The goal is to create a flexible, high-level framework that companies can use to design their own AI testing and validation procedures.  Of note, NIST is not prescribing exact methods for testing and validation.  Instead, it offers a structure around key terms, lifecycle stages, and guiding principles that align with future international standards.  NIST has asked for stakeholder input on the topics, scope, and priorities of the Zero Drafts process, and feedback is open until September 12, 2025.

The NIST outline breaks AI TEVV into several foundational elements, a non-exhaustive list of which includes:  Continue Reading NIST Welcomes Comments for AI Standards Zero Drafts Project

Oklahoma recently enacted Senate Bill 626, which substantially amends the state’s data breach notification law to broaden the scope of notification obligations and add a new regulator notification requirement along with a new “safe harbor”-style provision that provides liability protections if certain security measures are implemented.  The changes to Oklahoma’s law follow changes to other state data breach notification laws within the past year, including New York’s addition of a 30-day deadline for notice to individuals (added in early 2025) and Pennsylvania’s addition of a regulator notification requirement and obligations to provide free credit monitoring (added in mid-2024).  Key updates from Oklahoma’s bill, which will go into effect on January 1, 2026, are discussed in further detail below.  Continue Reading Oklahoma Substantially Amends Its Data Breach Notification Statute

On July 23, the White House released its AI Action Plan, outlining the key priorities of the Trump Administration’s AI policy agenda.  In parallel, President Trump signed three AI executive orders directing the Executive Branch to implement the AI Action Plan’s policies on “Preventing Woke AI in

Continue Reading Trump Administration Issues AI Action Plan and Series of AI Executive Orders

On June 17, the Joint California Policy Working Group on AI Frontier Models (“Working Group”) issued its final report on frontier AI policy, following public feedback on the draft version of the report released in March.  The report describes “frontier models” as the “most capable” subset of foundation models, or

Continue Reading California Frontier AI Working Group Issues Final Report on Frontier Model Regulation

On June 12, the New York legislature passed the Responsible AI Safety & Education (“RAISE”) Act (S 6953), a frontier model public safety bill that would establish safeguard, reporting, disclosure, and other requirements for large developers of frontier AI models.  If signed into law by Governor Kathy Hochul

Continue Reading New York Legislature Passes Sweeping AI Safety Legislation