
Ryan Burnette

Ryan Burnette is a government contracts and technology-focused lawyer who advises on federal contracting compliance requirements and on government and internal investigations that stem from those obligations. Ryan has particular experience with defense and intelligence contracting, as well as with cybersecurity, supply chain, artificial intelligence, and software development requirements.

Ryan also advises on Federal Acquisition Regulation (FAR) and Defense Federal Acquisition Regulation Supplement (DFARS) compliance, public policy matters, agency disputes, and government cost accounting, drawing on his prior experience in providing overall direction for the federal contracting system to offer insight on the practical implications of regulations. He has assisted industry clients with the resolution of complex civil and criminal investigations by the Department of Justice, and he regularly speaks and writes on government contracts, cybersecurity, national security, and emerging technology topics.

Ryan is especially experienced with:

  • Government cybersecurity standards, including the Federal Risk and Authorization Management Program (FedRAMP); DFARS 252.204-7012, DFARS 252.204-7020, and other agency cybersecurity requirements; National Institute of Standards and Technology (NIST) publications, such as NIST SP 800-171; and the Cybersecurity Maturity Model Certification (CMMC) program.
  • Software and artificial intelligence (AI) requirements, including federal secure software development frameworks and software security attestations; software bill of materials requirements; and current and forthcoming AI data disclosure, validation, and configuration requirements, including unique requirements that are applicable to the use of large language models (LLMs) and dual-use foundation models.
  • Supply chain requirements, including Section 889 of the FY19 National Defense Authorization Act; restrictions on covered semiconductors and printed circuit boards; Information and Communications Technology and Services (ICTS) restrictions; and federal exclusionary authorities, such as matters relating to the Federal Acquisition Security Council (FASC).
  • Information handling, marking, and dissemination requirements, including those relating to Covered Defense Information (CDI) and Controlled Unclassified Information (CUI).
  • Federal Cost Accounting Standards and FAR Part 31 allocation and reimbursement requirements.

Prior to joining Covington, Ryan served in the Office of Federal Procurement Policy in the Executive Office of the President, where he focused on the development and implementation of government-wide contracting regulations and administrative actions affecting more than $400 billion worth of goods and services each year.  While in government, Ryan helped develop several contracting-related Executive Orders, and worked with White House and agency officials on regulatory and policy matters affecting contractor disclosure and agency responsibility determinations, labor and employment issues, IT contracting, commercial item acquisitions, performance contracting, schedule contracting and interagency acquisitions, competition requirements, and suspension and debarment, among others.  Additionally, Ryan was selected to serve on a core team that led reform of security processes affecting federal background investigations for cleared federal employees and contractors in the wake of significant issues affecting the program.  These efforts resulted in the establishment of a semi-autonomous U.S. Government agency to conduct and manage background investigations.

This is part of an ongoing series of Covington blogs on the AI policies, executive orders, and other actions of the Trump Administration.  This blog describes AI actions taken by the Trump Administration in March 2025, and prior articles in this series are available here.

White House Receives Public Comments on AI Action Plan

On March 15, the White House Office of Science & Technology Policy and the Networking and Information Technology Research and Development National Coordination Office within the National Science Foundation closed the comment period for public input on the White House’s AI Action Plan, following their February 6 issuance of a Request for Information (“RFI”).  As required by President Trump’s AI EO, the RFI called on stakeholders to submit comments on the highest priority policy actions that should be included in the new AI Action Plan, centered on 20 broad and non-exclusive topics for potential input, including data centers, data privacy and security, technical and safety standards, intellectual property, and procurement.  The resulting AI Action Plan is intended to achieve the AI EO’s policy of “sustain[ing] and enhanc[ing] America’s global AI dominance.”

The RFI resulted in 8,755 submitted comments, including submissions from nonprofit organizations, think tanks, trade associations, industry groups, academia, and AI companies.  The final AI Action Plan is expected by July of 2025.

NIST Launches New AI Standards Initiatives

The National Institute of Standards and Technology (“NIST”) announced several AI initiatives in March to advance AI research and the development of AI standards.  On March 19, NIST launched its GenAI Image Challenge, an initiative to evaluate generative AI “image generators” and “image discriminators,” i.e., AI models designed to detect whether images are AI-generated.  NIST called on academia and industry research labs to participate in the challenge by submitting generators and discriminators to NIST’s GenAI platform.

On March 24, NIST released its final report on Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations, NIST AI 100-2e2025, with voluntary guidance for securing AI systems against adversarial manipulations and attacks.  Noting that adversarial attacks on AI systems “have been demonstrated under real-world conditions, and their sophistication and impacts have been increasing steadily,” the report provides a taxonomy of attacks on predictive and generative AI systems at various stages of the “machine learning lifecycle.”
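To make the idea of an “adversarial manipulation” concrete, the short Python sketch below illustrates one of the simplest evasion-style attacks on a toy image classifier, in the spirit of the techniques the NIST taxonomy catalogs.  It is purely illustrative and is not drawn from the NIST report; the model, weights, and parameters are hypothetical stand-ins.

import numpy as np

# Hypothetical toy classifier: logistic regression over flattened 8x8 "images."
rng = np.random.default_rng(0)
w = rng.normal(size=64)   # stand-in for trained weights
b = 0.1                   # stand-in for a trained bias

def predict_proba(x):
    # Probability the classifier assigns to the "authentic image" class.
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def evasion_perturb(x, y_true, eps=0.05):
    # Fast-gradient-sign-style evasion: nudge each pixel by +/- eps in the
    # direction that increases the classifier's loss for the true label.
    p = predict_proba(x)
    grad = (p - y_true) * w          # gradient of cross-entropy loss w.r.t. x
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

x_clean = rng.uniform(size=64)       # stand-in for a benign input
x_adv = evasion_perturb(x_clean, y_true=1.0)

print("score on clean input:    ", float(predict_proba(x_clean)))
print("score on perturbed input:", float(predict_proba(x_adv)))

The perturbation is small at the pixel level but can meaningfully shift the model’s output; mitigations of the kind the report surveys aim to blunt exactly this sort of targeted manipulation.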

On March 25, NIST announced the launch of an “AI Standards Zero Drafts project” that will pilot a new process for creating AI standards.  The new standards process will involve the creation of preliminary “zero drafts” of AI standards drafted by NIST and informed by rounds of stakeholder input, which will be submitted to standards developing organizations (“SDOs”) for formal standardization.  NIST outlined four AI topics for the pilot of the Zero Drafts project: (1) AI transparency and documentation about AI systems and data; (2) methods and metrics for AI testing, evaluation, verification, and validation (“TEVV”); (3) concepts and terminology for AI system designs, architectures, processes, and actors; and (4) technical measures for reducing synthetic content risks.  NIST called for stakeholder input on the topics, scope, and priorities of the Zero Drafts process, with no set deadline for submitting responses.

On April 3, the White House Office of Management and Budget (“OMB”) released two memoranda with AI guidance and requirements for federal agencies, Memorandum M-25-21 on Accelerating Federal Use of AI through Innovation, Governance, and Public Trust (“OMB AI Use Memo”) and Memorandum M-25-22 on Driving Efficient Acquisition of Artificial Intelligence in Government (“OMB AI Procurement Memo”).  According to the White House’s fact sheet, the OMB AI Use and AI Procurement Memos (collectively, the “new OMB AI Memos”), which rescind and replace OMB memos on AI use and procurement issued under President Biden’s Executive Order 14110 (“Biden OMB AI Memos”), shift U.S. AI policy to a “forward-leaning, pro-innovation, and pro-competition mindset” that will make agencies “more agile, cost-effective, and efficient.”  The new OMB AI Memos implement President Trump’s January 23 Executive Order 14179 on “Removing Barriers to American Leadership in Artificial Intelligence” (the “AI EO”), which directs the OMB to revise the Biden OMB AI Memos to make them consistent with the AI EO’s policy of “sustain[ing] and enhanc[ing] America’s global AI dominance.”

Overall, the new OMB AI Memos build on the frameworks established under President Trump’s 2020 Executive Order 13960 on “Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government” and the Biden OMB AI Memos.  This is consistent with the AI EO, which noted that the Administration would “revise” the Biden AI Memos “as necessary.”  At the same time, the new OMB AI Memos include some significant differences from the Biden OMB’s approach in the areas discussed below (as well as other areas).

  • Scope & Definitions.  The OMB AI Use Memo applies to “new and existing AI that is developed, used, or acquired by or on behalf of covered agencies,” with certain exclusions for the Intelligence Community and the Department of Defense.  The memo defines “AI” by reference to Section 238(g) of the John S. McCain National Defense Authorization Act for Fiscal Year 2019.  Like the Biden OMB AI Memos, the OMB AI Use Memo states that “no system should be considered too simple to qualify as covered AI due to a lack of technical complexity.”

    The OMB AI Procurement Memo applies to “AI systems or services that are acquired by or on behalf of covered agencies,” excluding the Intelligence Community, and includes “data systems, software, applications, tools, or utilities” that are “established primarily” for researching, developing, or implementing AI or where an “AI capability” is integrated into another process, operational activity, or technology system.  The memo excludes AI that is “embedded” in “common commercial products” that are widely available for commercial use and have “substantial non-AI purposes or functionalities,” along with AI “used incidentally by a contractor” during contract performance.  In other words, the policies are targeted at software that is primarily used for its AI capabilities, rather than at software that happens to incorporate AI.


This is part of an ongoing series of Covington blogs on the AI policies, executive orders, and other actions of the Trump Administration.  The first blog summarized key actions taken in the first weeks of the Trump Administration, including the revocation of President Biden’s 2023 Executive Order 14110 on the “Safe, Secure, and Trustworthy Development and Use of AI” and the release of President Trump’s Executive Order 14179 on “Removing Barriers to American Leadership in Artificial Intelligence” (“AI EO”).  This blog describes actions on AI taken by the Trump Administration in February 2025.

White House Issues Request for Information on AI Action Plan

On February 6, the White House Office of Science & Technology Policy (“OSTP”) issued a Request for Information (“RFI”) seeking public input on the content that should be in the White House’s yet-to-be-issued AI Action Plan.  The RFI marks the Trump Administration’s first significant step in implementing the very broad goals in the January 2025 AI EO, which requires Assistant to the President for Science & Technology Michael Kratsios, White House AI & Crypto Czar David Sacks, and National Security Advisor Michael Waltz to develop an “action plan” to achieve the AI EO’s policy of “sustain[ing] and enhanc[ing] America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.”  The RFI states that the AI Action Plan will “define the priority policy actions needed to sustain and enhance America’s AI dominance, and to ensure that unnecessarily burdensome requirements do not hamper private sector AI innovation.”

Specifically, the RFI seeks public comment on the “highest priority policy actions” that should be included in the AI Action Plan and encourages respondents to recommend “concrete” actions needed to address AI policy issues.  While noting that responses may “address any relevant AI policy topic,” the RFI provides 20 topics for potential input.  These topics are general and do not include specific questions or areas where particular input is needed.  The topics include: hardware and chips, data centers, energy consumption and efficiency, model and open-source development, data privacy and security, technical and safety standards, national security and defense, intellectual property, procurement, and export controls.  As of March 13, over 325 comments on the AI Action Plan have been submitted.  The public comment period ends on March 15, 2025.  Under the AI EO, the finalized AI Action Plan must be submitted to the President within 180 days of the EO, i.e., by July 2025.

This is the first in a new series of Covington blogs on the AI policies, executive orders, and other actions of the new Trump Administration.  This blog describes key actions on AI taken by the Trump Administration in January 2025.

Outgoing President Biden Issues Executive Order and Data Center Guidance for AI Infrastructure

Before turning to the Trump Administration, we note one key AI development from the final weeks of the Biden Administration.  On January 14, in one of his final acts in office, President Biden issued Executive Order 14141 on “Advancing United States Leadership in AI Infrastructure.”  This EO, which remains in force, sets out requirements and deadlines for the construction and operation of “frontier AI infrastructure,” including data centers and clean energy facilities, by private-sector entities on federal land.  Specifically, EO 14141 directs the Departments of Defense (“DOD”) and Energy (“DOE”) to lease federal lands for the construction and operation of AI data centers and clean energy facilities by the end of 2027, establishes solicitation and lease application processes for private sector applicants, directs federal agencies to take various steps to streamline and consolidate environmental permitting for AI infrastructure, and directs the DOE to take steps to update the U.S. electricity grid to meet the growing energy demands of AI. 

On January 14, and in tandem with the release of EO 14141, the Office of Management and Budget (“OMB”) issued Memorandum M-25-03 on “Implementation Guidance for the Federal Data Center Enhancement Act,” directing federal agencies to implement requirements related to the operation of data centers by federal agencies or government contractors.  Specifically, the memorandum requires federal agencies to regularly monitor and optimize data center electrical consumption, including through the use of automated tools, and to arrange for assessments by certified specialists of data center energy and water usage and efficiency, among other requirements.  Like EO 14141, Memorandum M-25-03 has yet to be rescinded by the Trump Administration.

Trump White House Revokes President Biden’s 2023 AI Executive Order

On January 20, President Trump issued Executive Order 14148 on “Initial Rescissions of Harmful Executive Orders and Actions,” revoking dozens of Biden Administration executive actions, including the October 2023 Executive Order 14110 on the “Safe, Secure, and Trustworthy Development and Use of AI” (“2023 AI EO”).  To implement these revocations, Section 3 of EO 14148 directs the White House Domestic Policy Council (“DPC”) and National Economic Council (“NEC”) to “review all Federal Government actions” taken pursuant to the revoked executive orders and “take all necessary steps to rescind, replace, or amend such actions as appropriate.”  EO 14148 further directs the DPC and NEC to submit, within 45 days of the EO, lists of additional Biden Administration orders, memoranda, and proclamations that should be rescinded and “replacement orders, memoranda, or proclamations” to “increase American prosperity.”  Finally, EO 14148 directs National Security Advisor Michael Waltz to initiate a “complete and thorough review” of all National Security Memoranda (“NSMs”) issued by the Biden Administration and recommend NSMs for rescission within 45 days of the EO.

On January 14, 2025, the Biden Administration issued an Executive Order on “Advancing United States Leadership in Artificial Intelligence Infrastructure” (the “EO”), with the goals of preserving U.S. economic competitiveness and access to powerful AI models, preventing U.S. dependence on foreign infrastructure, and promoting U.S. clean energy production to power the development and operation of AI.  Pursuant to these goals, the EO outlines criteria and timeframes for the construction and operation of “frontier AI infrastructure,” including data centers and clean energy resources, by private-sector entities on federal land.  The EO builds upon a series of actions on AI issued by the Biden Administration, including the October 2023 Executive Order on Safe, Secure, and Trustworthy AI and an October 2024 AI National Security Memorandum.

I. Federal Sites for AI Data Centers & Clean Energy Facilities

The EO contains various requirements for soliciting and leasing federal sites for AI infrastructure, including:

  • The EO directs the Departments of Defense (“DOD”) and Energy (“DOE”) to each identify and lease, by the end of 2027, at least three federal sites to private-sector entities for the construction and operation of “frontier AI data centers” and “clean energy facilities” to power them (“frontier AI infrastructure”).  Additionally, the EO directs the Department of the Interior (“DOI”) to identify (1) federal sites suitable for additional private-sector clean energy facilities as components of frontier AI infrastructure, and (2) at least five “Priority Geothermal Zones” suitable for geothermal power generation.  Finally, the EO directs the DOD and DOE to publish a joint list of ten high-priority federal sites that are most conducive to nuclear power capacity that can be readily available to serve AI data centers by December 31, 2035.

  • Public Solicitations.  By March 31, 2025, the DOD and DOE must launch competitive, 30-day public solicitations for private-sector proposals to lease federal land for frontier AI infrastructure construction.  In addition to identifying proposed sites for AI infrastructure construction, solicitations will require applicants to submit detailed plans regarding:
  • Timelines, financing methods, and technical construction plans for the site;
  • Proposed frontier AI training work to occur on the site once operational;
  • Use of high labor and construction standards at the site; and
  • Proposed lab-security measures, including personnel and material access requirements, associated with the operation of frontier AI infrastructure.

The DOD and DOE must select winning proposals by June 30, 2025, taking into account effects on competition in the broader AI ecosystem and other selection criteria, including an applicant’s proposed financing and funding sources; plans for high-quality AI training, resource efficiency, labor standards, and commercialization of IP developed at the site; safety and security measures and capabilities; AI workforce capabilities; and prior experience with comparable construction projects.

This is part of a series of Covington blogs on the implementation of Executive Order 14028, “Improving the Nation’s Cybersecurity,” issued by President Biden on May 12, 2021 (the “Cyber EO”).  The first blog summarized the Cyber EO’s key provisions and timelines, and the subsequent blogs described the actions taken by various government agencies to implement the Cyber EO from June 2021 through October 2024.  This blog describes key actions taken during November 2024 to implement the Cyber EO and the U.S. National Cybersecurity Strategy, as well as other actions that support their general principles.

National Institute of Standards and Technology (“NIST”) Publishes Draft “Enhanced Security Requirements for Protecting Controlled Unclassified Information”

On November 13, 2024, NIST published a draft of Special Publication (“SP”) 800-172 Rev. 3 that “provides recommended security requirements to protect the confidentiality, integrity, and availability of [Controlled Unclassified Information] when it is resident in a nonfederal system and organization and is associated with a high value asset or critical program.”  In particular, the draft requirements “give organizations the capability to achieve a multidimensional, defense-in-depth protection strategy against advanced persistent threats . . . and help to ensure the resiliency of systems and organizations.”  The draft requirements “are intended for use by federal agencies in contractual vehicles or other agreements between those agencies and nonfederal organizations.”  In the publication, NIST stated that it does not expect that all requirements are needed “universally.”  Instead, the draft requirements are intended to be “selected by federal agencies based on specific mission needs and risks.”

These requirements serve as a supplement to NIST SP 800-171 and apply to particular high-risk entities.  To that end, the current version of NIST SP 800-172 (i.e., Rev. 2) is used by the U.S. Department of Defense (“DoD”) for its forthcoming Cybersecurity Maturity Model Certification (“CMMC”) program, which we discussed in more detail here.  Specifically, contractors must implement twenty-four controls that DoD selected from SP 800-172 Rev. 2 in order to obtain the highest level of certification – Level 3.  Just as the CMMC Final Rule incorporated Rev. 2 of SP 800-171 (rather than Rev. 3), the CMMC program will not immediately incorporate SP 800-172 Rev. 3 requirements.  However, the draft requirements provide insight into how CMMC could evolve.

On November 15, 2024, the Department of Defense (“DoD”) published a Notice of Proposed Rulemaking (“Proposed Rule”) entitled “Defense Federal Acquisition Regulation Supplement: Disclosure of Information Regarding Foreign Obligations.”  The Proposed Rule would impose new disclosure obligations on “Offeror[s]” (pre-award) and “Contractor[s]” (post-award) that are triggered in certain circumstances by review or by an obligation to allow review of their source or computer code either by a foreign government or a foreign person.  If the Proposed Rule takes effect, the obligations would apply to any “prospective contractor” or any existing contractor.  The Proposed Rule also does not distinguish between companies based in or outside the United States.

The Proposed Rule would implement section 1655 of the National Defense Authorization Act for Fiscal Year 2019 (“NDAA”), which states that “[DoD] may not use a product, service, or system procured or acquired after the date of the enactment of this Act relating to information or operational technology, cybersecurity, an industrial control system, or weapons system provided by a person unless that person” makes certain disclosures related to: (1) foreign government or foreign person access to computer or source code, and (2) the person’s Export Administration Regulations (“EAR”) or International Traffic in Arms Regulations (“ITAR”) applications or licenses.  Importantly, per the NDAA, these disclosure obligations include activities dating back to August 13, 2013.

A summary of the obligations and key definitions described in the Proposed Rule is below.

Disclosure Obligations

Disclosure of Source or Computer Code

The Proposed Rule would require any “Offeror” or “Contractor” for defense contracts to disclose in the Catalog Data Standard in the Electronic Data Access (“EDA”) system (https://piee.eb.mil) “[w]hether, and if so, when, at any time after August 12, 2013,” they (1) “allowed a foreign person or foreign government to review” or (2) “[are] under any obligation to allow a foreign person or foreign government to review, as a condition of entering into an agreement for sale or other transaction with a foreign government or with a foreign person on behalf of such a government”:

  • “The source code for any product, system, or service that DoD is using or intends to use; or
  • The computer code for any other than commercial product, system, or service developed for DoD.”

When this clause is included in a solicitation, by submitting its offer to the government or higher tier contractor, an “Offeror” is representing that it “has completed the foreign obligation disclosures in EDA and the disclosures are current, accurate, and complete.”  For post-award disclosures, the requirements would most likely first be added in new task orders, delivery orders, and options.

On Tuesday, October 22, 2024, Pennsylvania State University (“Penn State”) reached a settlement with the Department of Justice (“DoJ”), agreeing to pay the U.S. Government (“USG”) $1.25 million to resolve alleged cybersecurity compliance violations under the False Claims Act (“FCA”).  This settlement follows a qui tam action filed by a whistleblower and former employee of Penn State’s Applied Research Laboratory.  The settlement agreement provides some additional insight into the priorities of DoJ’s Civil Cyber-Fraud Initiative (“CFI”) and the types of cybersecurity issues of interest to the Department.  It also highlights the extent to which DoJ enforcement actions focus on the full range of cybersecurity compliance obligations in a company’s contracts.

DoJ’s Civil Cyber-Fraud Initiative

On October 6, 2021, following a series of ransomware and other cyberattacks on government contractors and other public and private entities, DoJ announced the CFI.  We covered the CFI in more detail when it was first announced here, and in a comprehensive, separately published article here.  As explained by Deputy Attorney General Lisa Monaco and other DoJ officials, DoJ is using the civil FCA to pursue government contractors and grantees that fail to comply with mandatory cyber incident reporting requirements and other regulatory or contractual cybersecurity requirements.  Moreover, depending on the facts, DoJ’s Criminal Division likely will be interested in some of these cases.

About the Settlement

On October 5, 2022, a relator – the former chief information officer for Penn State’s Applied Research Laboratory – filed a qui tam action in the United States District Court for the Eastern District of Pennsylvania.  The relator alleged in an amended complaint from 2023 that he discovered and raised non-compliance issues, which Penn State management did not address, and that Penn State falsified compliance documentation.  On October 23, 2024, DoJ formally intervened and notified the court that it had reached a settlement agreement with Penn State.  The settlement agreement alleges that Penn State violated the FCA by failing to implement adequate safeguards and to meet cybersecurity requirements set forth under National Institute of Standards and Technology (“NIST”) Special Publication (“SP”) 800-171, “Protecting Controlled Unclassified Information in Nonfederal Information Systems and Organizations.”  As set forth in the settlement agreement, these issues related to fifteen contracts and subcontracts involving the Department of Defense (“DoD”) and the National Aeronautics and Space Administration (“NASA”) between January 2018 and November 2023.


This is the thirty-fourth in a series of Covington blogs on implementation of Executive Order 14028, “Improving the Nation’s Cybersecurity,” issued by President Biden on May 12, 2021 (the “Cyber EO”).  The first blog summarized the Cyber EO’s key provisions and timelines, and the subsequent blogs described the actions taken by various government agencies to implement the Cyber EO from June 2021 through January 2024.  This blog describes key actions taken to implement the Cyber EO, as well as the U.S. National Cybersecurity Strategy, during February 2024.  It also describes key actions taken during February 2024 to implement President Biden’s Executive Order on Artificial Intelligence (the “AI EO”), particularly its provisions that impact cybersecurity, secure software, and federal government contractors.

NIST Publishes Cybersecurity Framework 2.0

On February 26, 2024, the U.S. National Institute of Standards and Technology (“NIST”) published version 2.0 of its Cybersecurity Framework.  The NIST Cybersecurity Framework (“CSF” or “Framework”) provides a taxonomy of high-level cybersecurity outcomes that can be used by any organization, regardless of its size, sector, or relative maturity, to better understand, assess, prioritize, and communicate its cybersecurity efforts.  CSF 2.0 makes some significant changes to the Framework, particularly in the areas of Governance and Cybersecurity Supply Chain Risk Management (“C-SCRM”).  Covington’s Privacy and Cybersecurity group has posted a blog that discusses CSF 2.0 and those changes in greater detail.

NTIA Requests Comment Regarding “Open Weight” Dual-Use Foundation AI Models

Also on February 26, the National Telecommunications and Information Administration (“NTIA”) published a request for comments on the risks, benefits, and possible regulation of “dual-use foundation models for which the model weights are widely available.”  Among other questions raised by NTIA in the document is whether the availability of public model weights could pose risks to infrastructure or the defense sector.  NTIA is seeking comments in order to prepare a report, required by the AI EO by July 26, 2024, on the risks and benefits of private companies making the weights of their foundational AI models publicly available.  NTIA’s request for comments notes that “openness” or “wide availability” are terms without clear definition, and that “more information [is] needed to detail the relationship between openness and the wide availability of both model weights and open foundation models more generally.”  NTIA also requests comments on potential regulatory regimes for dual-use foundation models with widely available model weights, as well as the kinds of regulatory structures “that could deal with not only the large scale of these foundation models, but also the declining level of computing resources needed to fine-tune and retrain them.”