On March 28, 2024, the White House Office of Management and Budget (OMB) released guidance on governance and risk management for federal agency use of artificial intelligence (AI).  The guidance was issued in furtherance of last fall’s White House AI Executive Order, which established goals to promote the safe, secure, and trustworthy use and development of AI systems.

The OMB guidance—Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence—defines AI broadly to include machine learning and “[a]ny artificial system that performs tasks under varying and unpredictable circumstances without significant human oversight, or that can learn from experience and improve performance when exposed to data sets,” among other things.  It directs federal agencies and departments to address risks from the use of AI, expand public transparency, advance responsible AI innovation, grow an AI-focused talent pool and workforce, and strengthen AI governance systems.  Federal agencies must implement prescribed safeguard practices no later than December 1, 2024.

More specifically, the guidance includes a number of requirements for federal agencies, including:

  • Expanded Governance:  The guidance requires agencies, within 60 days, to designate Chief AI Officers responsible for coordinating agency use of AI, promoting AI adoption, and managing risk.  It also requires each agency to convene an AI governance body within 60 days.  Within 180 days, agencies must submit to OMB and release publicly an agency plan to achieve consistency with OMB’s guidance.
  • Inventories:  Each agency (except the Department of Defense and those that comprise the intelligence community) is required to inventory its AI use cases at least annually and submit a report to OMB.  Some use cases are exempt from being reported individually, but agencies still must report aggregate metrics about those use cases to OMB if otherwise in scope.  The guidance states that OMB will later issue “detailed instructions” for these reports.
  • Removing Barriers to Use of AI:  The guidance focuses on removing barriers to the responsible use of AI, including by ensuring that adequate infrastructure exists for AI projects and that agencies have sufficient capacity to manage data used for training, testing, and operating AI.  As part of this, the guidance states that agencies “must proactively share their custom-developed code — including models and model weights — for AI applications in active use and must release and maintain that code as open-source software on a public repository,” subject to some exceptions (e.g., if sharing is restricted by law or by a contractual obligation). 
  • Special Requirements for FedRAMP:  The guidance calls for updates to the Federal Risk and Authorization Management Program (FedRAMP), which generally applies to cloud services that are sold to the U.S. Government.  Specifically, the guidance requires agencies to update authorization processes for FedRAMP services, including by advancing continuous authorizations (as distinct from annual authorizations) for services with AI.  The guidance also encourages agencies to prioritize critical and emerging technologies and generative AI in issuing Authorizations to Operate (ATOs). 
  • Risk Management:  For certain “safety-impacting” and “rights-impacting” AI use cases, some agencies will need to adopt minimum risk management practices.  These include the completion of an AI impact assessment that examines, for example, the intended purpose for AI, expected benefits, and potential risks.  The minimum practices also require the agency to test AI for performance in a real-world context and conduct ongoing monitoring of the system.  Among other requirements, the agency will be responsible for identifying and assessing AI’s impact on equity and fairness and taking steps to mitigate algorithmic discrimination when present.  The guidance presents these practices as initial baseline tasks and requires agencies to identify additional context-specific risks for relevant use cases to be addressed by applying best practices for AI risk management, such as those from the National Institute of Standards and Technology (NIST) AI Risk Management Framework.  The guidance also calls for human oversight of safety- and rights-impacting AI decision making and remedy processes for affected individuals.  Agencies must implement these minimum practices no later than December 1, 2024.

Separately but relatedly, OMB issued an RFI on March 29, 2024, to inform future action governing the responsible procurement of AI under federal contracts.  The RFI seeks responses to several questions designed to provide OMB with information to enable it and/or federal agencies to craft contract language and requirements that will further agency AI use and innovation while managing its risks and performance.  Responses to these questions, as well as any other comments on the subject, are due by April 29, 2024.

Yaron Dori

Yaron Dori has over 25 years of experience advising technology, telecommunications, media, life sciences, and other types of companies on their most pressing business challenges. He is a former chair of the firm’s technology, communications and media practices and currently serves on the firm’s eight-person Management Committee.

Yaron’s practice advises clients on strategic planning, policy development, transactions, investigations and enforcement, and regulatory compliance.

Early in his career, Yaron advised telecommunications companies and investors on regulatory policy and frameworks that led to the development of broadband networks. When those networks became bidirectional and enabled companies to collect consumer data, he advised those companies on their data privacy and consumer protection obligations. Today, as new technologies such as Artificial Intelligence (AI) are being used to enhance the applications and services offered by such companies, he advises them on associated legal and regulatory obligations and risks. It is this varied background – which tracks the evolution of the technology industry – that enables Yaron to provide clients with a holistic, 360-degree view of technology policy, regulation, compliance, and enforcement.

Yaron represents clients before federal regulatory agencies—including the Federal Communications Commission (FCC), the Federal Trade Commission (FTC), and the Department of Commerce (DOC)—and the U.S. Congress in connection with a range of issues under the Communications Act, the Federal Trade Commission Act, and similar statutes. He also represents clients on state regulatory and enforcement matters, including those that pertain to telecommunications, data privacy, and consumer protection regulation. His deep experience in each of these areas enables him to advise clients on a wide range of technology regulations and key business issues in which these areas intersect.

With respect to technology and telecommunications matters, Yaron advises clients on a broad range of business, policy and consumer-facing issues, including:

  • Artificial Intelligence and the Internet of Things;
  • Broadband deployment and regulation;
  • IP-enabled applications, services and content;
  • Section 230 and digital safety considerations;
  • Equipment and device authorization procedures;
  • The Communications Assistance for Law Enforcement Act (CALEA);
  • Customer Proprietary Network Information (CPNI) requirements;
  • The Cable Privacy Act;
  • Net Neutrality; and
  • Local competition, universal service, and intercarrier compensation.

Yaron also has extensive experience in structuring transactions and securing regulatory approvals at both the federal and state levels for mergers, asset acquisitions and similar transactions involving large and small FCC and state communication licensees.

With respect to privacy and consumer protection matters, Yaron advises clients on a range of business, strategic, policy and compliance issues, including those that pertain to:

  • The FTC Act and related agency guidance and regulations;
  • State privacy laws, such as the California Consumer Privacy Act (CCPA) and California Privacy Rights Act, the Colorado Privacy Act, the Connecticut Data Privacy Act, the Virginia Consumer Data Protection Act, and the Utah Consumer Privacy Act;
  • The Electronic Communications Privacy Act (ECPA);
  • Location-based services that use WiFi, beacons or similar technologies;
  • Digital advertising practices, including native advertising and endorsements and testimonials; and
  • The application of federal and state telemarketing, commercial fax, and other consumer protection laws, such as the Telephone Consumer Protection Act (TCPA), to voice, text, and video transmissions.

Yaron also has experience advising companies on congressional, FCC, FTC and state attorney general investigations into various consumer protection and communications matters, including those pertaining to social media influencers, digital disclosures, product discontinuance, and advertising claims.

Robert Huffman

Bob Huffman counsels government contractors on emerging technology issues, including artificial intelligence (AI), cybersecurity, and software supply chain security, that are currently affecting federal and state procurement. His areas of expertise include the Department of Defense (DOD) and other agency acquisition regulations governing information security and the reporting of cyber incidents, the Cybersecurity Maturity Model Certification (CMMC) program, the requirements for secure software development self-attestations and bills of materials (SBOMs) emanating from the May 2021 Executive Order on Cybersecurity, and the various requirements for responsible AI procurement, safety, and testing currently being implemented under the October 2023 AI Executive Order. 

Bob also represents contractors in False Claims Act (FCA) litigation and investigations involving cybersecurity and other technology compliance issues, as well as more traditional government contracting cost, quality, and regulatory compliance issues. These investigations include significant parallel civil/criminal proceedings growing out of the Department of Justice’s Cyber Fraud Initiative. They also include investigations resulting from False Claims Act qui tam lawsuits and other enforcement proceedings. Bob has represented clients in over a dozen FCA qui tam suits.

Bob also regularly counsels clients on government contracting supply chain compliance issues, including those arising under the Buy American Act/Trade Agreements Act and Section 889 of the FY2019 National Defense Authorization Act. In addition, Bob advises government contractors on rules relating to IP, including government patent rights, technical data rights, rights in computer software, and the rules applicable to IP in the acquisition of commercial products, services, and software. He focuses this aspect of his practice on the overlap of these traditional government contracts IP rules with the IP issues associated with the acquisition of AI services and the data needed to train the large learning models on which those services are based. 

Bob is ranked by Chambers USA for his work in government contracts and he writes extensively in the areas of procurement-related AI, cybersecurity, software security, and supply chain regulation. He also teaches a course at Georgetown Law School that focuses on the technology, supply chain, and national security issues associated with energy and climate change.

Ryan Burnette

Ryan Burnette is a government contracts and technology-focused lawyer who advises on federal contracting compliance requirements and on government and internal investigations that stem from these obligations. Ryan has particular experience with defense and intelligence contracting, as well as with cybersecurity, supply chain, artificial intelligence, and software development requirements.

Ryan also advises on Federal Acquisition Regulation (FAR) and Defense Federal Acquisition Regulation Supplement (DFARS) compliance, public policy matters, agency disputes, and government cost accounting, drawing on his prior experience in providing overall direction for the federal contracting system to offer insight on the practical implications of regulations. He has assisted industry clients with the resolution of complex civil and criminal investigations by the Department of Justice, and he regularly speaks and writes on government contracts, cybersecurity, national security, and emerging technology topics.

Ryan is especially experienced with:

  • Government cybersecurity standards, including the Federal Risk and Authorization Management Program (FedRAMP); DFARS 252.204-7012, DFARS 252.204-7020, and other agency cybersecurity requirements; National Institute of Standards and Technology (NIST) publications, such as NIST SP 800-171; and the Cybersecurity Maturity Model Certification (CMMC) program.
  • Software and artificial intelligence (AI) requirements, including federal secure software development frameworks and software security attestations; software bill of materials requirements; and current and forthcoming AI data disclosure, validation, and configuration requirements, including unique requirements that are applicable to the use of large language models (LLMs) and dual use foundation models.
  • Supply chain requirements, including Section 889 of the FY19 National Defense Authorization Act; restrictions on covered semiconductors and printed circuit boards; Information and Communications Technology and Services (ICTS) restrictions; and federal exclusionary authorities, such as matters relating to the Federal Acquisition Security Council (FASC).
  • Information handling, marking, and dissemination requirements, including those relating to Covered Defense Information (CDI) and Controlled Unclassified Information (CUI).
  • Federal Cost Accounting Standards and FAR Part 31 allocation and reimbursement requirements.

Prior to joining Covington, Ryan served in the Office of Federal Procurement Policy in the Executive Office of the President, where he focused on the development and implementation of government-wide contracting regulations and administrative actions affecting more than $400 billion worth of goods and services each year.  While in government, Ryan helped develop several contracting-related Executive Orders, and worked with White House and agency officials on regulatory and policy matters affecting contractor disclosure and agency responsibility determinations, labor and employment issues, IT contracting, commercial item acquisitions, performance contracting, schedule contracting and interagency acquisitions, competition requirements, and suspension and debarment, among others.  Additionally, Ryan was selected to serve on a core team that led reform of security processes affecting federal background investigations for cleared federal employees and contractors in the wake of significant issues affecting the program.  These efforts resulted in the establishment of a semi-autonomous U.S. Government agency to conduct and manage background investigations.

Jayne Ponder

Jayne Ponder counsels national and multinational companies across industries on data privacy, cybersecurity, and emerging technologies, including Artificial Intelligence and Internet of Things.

In particular, Jayne advises clients on compliance with federal, state, and global privacy frameworks, and counsels clients on navigating the rapidly evolving legal landscape. Her practice includes partnering with clients on the design of new products and services, drafting and negotiating privacy terms with vendors and third parties, developing privacy notices and consent forms, and helping clients design governance programs for the development and deployment of Artificial Intelligence and Internet of Things technologies.

Jayne routinely represents clients in privacy and consumer protection enforcement actions brought by the Federal Trade Commission and state attorneys general, including related to data privacy and advertising topics. She also helps clients articulate their perspectives through the rulemaking processes led by state regulators and privacy agencies.

As part of her practice, Jayne advises companies on cybersecurity incident preparedness and response, including by drafting, revising, and testing incident response plans, conducting cybersecurity gap assessments, engaging vendors, and analyzing obligations under breach notification laws following an incident.

Vanessa Lauber

Vanessa Lauber is an associate in the firm’s New York office and a member of the Data Privacy and Cybersecurity Practice Group, counseling clients on data privacy and emerging technologies, including artificial intelligence.

Vanessa’s practice includes partnering with clients on compliance with federal and state privacy laws and FTC and consumer protection laws and guidance. Additionally, Vanessa routinely counsels clients on drafting and developing privacy notices and policies. Vanessa also advises clients on trends in artificial intelligence regulations and helps design governance programs for the development and deployment of artificial intelligence technologies across a number of industries.