
Nicholas Xenakis

Nick Xenakis draws on his Capitol Hill and legal experience to provide public policy and crisis management counsel to clients in a range of industries.

Nick assists clients in developing and implementing policy solutions to litigation and regulatory matters, including on issues involving antitrust, artificial intelligence, bankruptcy, criminal justice, financial services, immigration, intellectual property, life sciences, national security, and technology. He also represents companies and individuals in investigations before U.S. Senate and House Committees.

Nick previously served as General Counsel for the U.S. Senate Judiciary Committee, where he managed committee staff and directed legislative efforts. He also participated in key judicial and Cabinet confirmations, including those of Attorneys General and Supreme Court Justices. Before his time on Capitol Hill, Nick served as an attorney with the Federal Public Defender’s Office for the Eastern District of Virginia.

The Senate Intelligence Committee’s January 30, 2025, confirmation hearing for former Representative Tulsi Gabbard, President Trump’s nominee for Director of National Intelligence, previewed a potentially difficult reauthorization path for Section 702 of the Foreign Intelligence Surveillance Act (“FISA”).  While Gabbard now appears to publicly favor reauthorization of Section 702, her

Continue Reading Tulsi Gabbard’s Confirmation Hearing for Director of National Intelligence: A Preview of a FISA Section 702 Reauthorization Fight?

On January 14, 2025, the Biden Administration issued an Executive Order on “Advancing United States Leadership in Artificial Intelligence Infrastructure” (the “EO”), with the goals of preserving U.S. economic competitiveness and access to powerful AI models, preventing U.S. dependence on foreign infrastructure, and promoting U.S. clean energy production to power the development and operation of AI.  Pursuant to these goals, the EO outlines criteria and timeframes for the construction and operation of “frontier AI infrastructure,” including data centers and clean energy resources, by private-sector entities on federal land.  The EO builds upon a series of actions on AI issued by the Biden Administration, including the October 2023 Executive Order on Safe, Secure, and Trustworthy AI and an October 2024 AI National Security Memorandum.

I. Federal Sites for AI Data Centers & Clean Energy Facilities

The EO contains various requirements for soliciting and leasing federal sites for AI infrastructure, including:

  • Site Identification.  The EO directs the Departments of Defense (“DOD”) and Energy (“DOE”) to each identify and lease, by the end of 2027, at least three federal sites to private-sector entities for the construction and operation of “frontier AI data centers” and “clean energy facilities” to power them (“frontier AI infrastructure”).  Additionally, the EO directs the Department of the Interior (“DOI”) to identify (1) federal sites suitable for additional private-sector clean energy facilities as components of frontier AI infrastructure, and (2) at least five “Priority Geothermal Zones” suitable for geothermal power generation.  Finally, the EO directs the DOD and DOE to publish a joint list of ten high-priority federal sites that are most conducive to hosting nuclear power capacity that can be readily available to serve AI data centers by December 31, 2035.

  • Public Solicitations.  By March 31, 2025, the DOD and DOE must launch competitive, 30-day public solicitations for private-sector proposals to lease federal land for frontier AI infrastructure construction.  In addition to identifying proposed sites for AI infrastructure construction, solicitations will require applicants to submit detailed plans regarding:
    • Timelines, financing methods, and technical construction plans for the site;
    • Proposed frontier AI training work to occur on the site once operational;
    • Use of high labor and construction standards at the site; and
    • Proposed lab-security measures, including personnel and material access requirements, associated with the operation of frontier AI infrastructure.

The DOD and DOE must select winning proposals by June 30, 2025, taking into account effects on competition in the broader AI ecosystem and other selection criteria, including an applicant’s proposed financing and funding sources; plans for high-quality AI training, resource efficiency, labor standards, and commercialization of IP developed at the site; safety and security measures and capabilities; AI workforce capabilities; and prior experience with comparable construction projects.

Continue Reading Biden Administration Releases Executive Order on AI Infrastructure

At the end of his prior administration, President Trump tried to overhaul the federal workforce by making it easier to remove a substantial number of federal employees. With his incoming administration, President-elect Trump may try to do so again. Though Presidents have broad authority over federal employees, these renewed efforts may face new legal challenges because of a recent Biden Administration rule specifically intended to prevent a rollback of civil service protections.  Importantly, the rule itself recognizes federal employees’ long-standing reliance interests in their jobs that could make rescinding the new rule particularly difficult.

Toward the end of that prior administration, on October 21, 2020, President Trump issued an “Executive Order on Creating Schedule F in the Excepted Service.”  That order created a new Schedule F for “[p]ositions of a confidential, policy-determining, policy-making, or policy-advocating character not normally subject to change as a result of Presidential transition.”  Simply put, it would have allowed the President to treat some career civil servants as political appointees and exempt them from Civil Service Rules and Regulations, including protections from removal, thereby giving the President expanded authority to remove federal employees at will.

Though President Trump’s order never went into effect, the Biden Administration nonetheless finalized a rule on April 4, 2024, that clearly responded to it.  That rule, titled “Upholding Civil Service Protections and Merit System Principles,” “clarifies and reinforces longstanding civil service protections and merit system principles[.]”  Interestingly, the rule’s preamble directly addresses a situation where “a future Administration,” such as the incoming Trump Administration, “seeks to rescind this rule and replace it with [Schedule F].”  The preamble goes on to read as a roadmap of the significant hurdles rollback efforts would face.  With that framing in mind, the rule explains that a future Administration, in complying with the Administrative Procedure Act (“APA”), would need to:

Continue Reading Civil Service Protections in the Trump Administration

This quarterly update highlights key legislative, regulatory, and litigation developments in the third quarter of 2024 related to artificial intelligence (“AI”) and connected and automated vehicles (“CAVs”).  As noted below, some of these developments provide industry with the opportunity for participation and comment.

I. Artificial Intelligence

Federal Legislative Developments

There continued to be strong bipartisan interest in passing federal legislation related to AI.  While it has been challenging to pass legislation through this Congress, there remains the possibility that one or more of the more targeted bills that have bipartisan support and Committee approval could advance during the lame duck period.

  • Senate Commerce, Science, and Transportation Committee: Lawmakers in the Senate Commerce, Science, and Transportation Committee moved forward with nearly a dozen AI-related bills, including legislation focused on developing voluntary technical guidelines for AI systems and establishing AI testing and risk assessment frameworks. 
    • In July, the Committee voted to advance the Validation and Evaluation for Trustworthy (VET) Artificial Intelligence Act (S.4769), which was introduced by Senators John Hickenlooper (D-CO) and Shelley Moore Capito (R-WV).  The Act would require the National Institute of Standards and Technology (“NIST”) to develop voluntary guidelines and specifications for internal and external assurances of AI systems, in collaboration with public and private sector organizations. 
    • In August, the Promoting United States Leadership in Standards Act of 2024 (S.3849) was placed on the Senate legislative calendar after advancing out of the Committee in July.  Introduced in February 2024 by Senators Mark Warner (D-VA) and Marsha Blackburn (R-TN), the Act would require NIST to support U.S. involvement in the development of AI technical standards through briefings, pilot programs, and other activities.  
    • In July, the Future of Artificial Intelligence Innovation Act of 2024 (S.4178)—introduced in April by Senators Maria Cantwell (D-WA), Todd Young (R-IN), John Hickenlooper (D-CO), and Marsha Blackburn (R-TN)—was ordered to be reported out of the Committee and gained three additional co-sponsors: Senators Roger F. Wicker (R-MS), Ben Ray Lujan (D-NM), and Kyrsten Sinema (I-AZ).  The Act would codify the AI Safety Institute, which would be required to develop voluntary guidelines and standards for promoting AI innovation through public-private partnerships and international alliances.
    • In July, the Artificial Intelligence Research, Innovation, and Accountability Act of 2023 (S.3312) passed out of the Committee, as amended.  Introduced in November 2023 by Senators John Thune (R-SD), Amy Klobuchar (D-MN), Roger Wicker (R-MS), John Hickenlooper (D-CO), Ben Ray Lujan (D-NM), and Shelley Moore Capito (R-WV), the Act would establish a comprehensive regulatory framework for “high-impact” AI systems, including testing and evaluation standards, risk assessment requirements, and transparency report requirements.  The Act would also require NIST to develop sector-specific recommendations for agency oversight of high-impact AI, and to research and develop means for distinguishing between content created by humans and AI systems.
  • Senate Homeland Security and Governmental Affairs Committee:  In July, the Senate Homeland Security Committee voted to advance the PREPARED for AI Act (S.4495).  Introduced in June by Senators Gary Peters (D-MI) and Thom Tillis (R-NC), the Act would establish a risk-based framework for the procurement and use of AI by federal agencies and create a Chief AI Officers Council and agency AI Governance Board to ensure that federal agencies benefit from advancements in AI.
  • National Defense Authorization Act for Fiscal Year 2025:  In August, Senators Gary Peters (D-MI) and Mike Braun (R-IN) proposed an amendment (S.Amdt.3232) to the National Defense Authorization Act for Fiscal Year 2025 (S.4638) (“NDAA”).  The amendment would add the Transparent Automated Governance Act and the AI Leadership Training Act to the NDAA.  The Transparent Automated Governance Act would require the Office of Management and Budget (“OMB”) to issue guidance to agencies to implement transparency practices relating to the use of AI and other automated systems.  The AI Leadership Training Act would require OMB to establish a training program for federal procurement officials on the operational benefits and privacy risks of AI.  The Act would also require the Office of Personnel Management (“OPM”) to establish a training program on AI for federal management officials and supervisors.   

Continue Reading U.S. Tech Legislative, Regulatory & Litigation Update – Third Quarter 2024

In late August, the California legislature passed two bills that would limit the creation or use of “digital replicas,” making California the latest state to seek new protections for performers, artists, and other employees in response to the rise of AI-generated content.  These state efforts come as Congress considers the

Continue Reading California Passes Digital Replica Legislation as Congress Considers Federal Approach

This quarterly update highlights key legislative, regulatory, and litigation developments in the second quarter of 2024 related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), and data privacy and cybersecurity. 

I. Artificial Intelligence

Federal Legislative Developments

  • Impact Assessments: The American Privacy Rights Act of 2024 (H.R. 8818, hereinafter “APRA”) was formally introduced in the House by Representative Cathy McMorris Rodgers (R-WA) on June 25, 2024.  Notably, while previous drafts of the APRA, including the May 21 revised draft, would have required algorithm impact assessments, the introduced version no longer has the “Civil Rights and Algorithms” section that contained these requirements.
  • Disclosures: In April, Representative Adam Schiff (D-CA) introduced the Generative AI Copyright Disclosure Act of 2024 (H.R. 7913).  The Act would require persons that create a training dataset that is used to build a generative AI system to provide notice to the Register of Copyrights containing a “sufficiently detailed summary” of any copyrighted works used in the training dataset and the URL for such training dataset, if the dataset is publicly available.  The Act would require the Register to issue regulations to implement the notice requirements and to maintain a publicly available online database that contains each notice filed.
  • Public Awareness and Toolkits: Certain legislative proposals focused on increasing public awareness of AI and its benefits and risks.  For example, Senator Todd Young (R-IN) introduced the Artificial Intelligence Public Awareness and Education Campaign Act (S. 4596), which would require the Secretary of Commerce, in coordination with other agencies, to carry out a public awareness campaign that provides information regarding the benefits and risks of AI in the daily lives of individuals.  Senator Edward Markey (D-MA) introduced the Social Media and AI Resiliency Toolkits in Schools Act (S. 4614), which would require the Department of Education and the Department of Health and Human Services to develop toolkits to inform students, educators, parents, and others on how AI and social media may impact student mental health.
  • Senate AI Working Group Releases AI Roadmap: On May 15, the Bipartisan Senate AI Working Group published a roadmap for AI policy in the United States (the “AI Roadmap”).  The AI Roadmap encourages committees to conduct further research on specific issues relating to AI, such as “AI and the Workforce” and “High Impact Uses for AI.”  It states that existing laws (concerning, e.g., consumer protection, civil rights) “need to consistently and effectively apply to AI systems and their developers, deployers, and users” and raises concerns about AI “black boxes.”  The AI Roadmap also addresses the need for best practices and the importance of having a human in the loop for certain high impact automated tasks.

Continue Reading U.S. Tech Legislative, Regulatory & Litigation Update – Second Quarter 2024

U.S. Senate Majority Leader Chuck Schumer (D-NY) yesterday, July 23, initiated procedural steps that will likely lead to swift Senate passage of the Kids Online Safety Act (“KOSA”) and the Children and Teens’ Online Privacy Protection Act (“COPPA 2.0”).  Both bills have been under consideration in the Senate and the House of Representatives for some time, which we have previously covered.  Schumer’s action will likely bring the two bills in a single package to the Senate Floor as soon as Thursday, July 25.  The future of the legislation in the House, however, is less certain.

KOSA, led by Sens. Richard Blumenthal (D-Conn.) and Marsha Blackburn (R-Tenn.), would, in its current form (S.1409), require specified “covered platforms” to implement new safeguards, tools, and transparency for minors under 17 online.  These covered platforms:

  • Would have a duty of care to prevent and mitigate enumerated harms.
  • Must have default safeguards for known minors, including tools that: limit the ability of others to communicate with minors; limit features that increase, sustain, or extend use of the platform by the minor; and control personalization systems.
  • Must provide “readily-accessible and easy-to-use settings for parents” to help manage a minor’s use of a platform.
  • Must provide specified notices and obtain verifiable parental consent for children under 13 to register for the service.

KOSA also requires government agencies to conduct research on minors’ use of online services, directs the Federal Trade Commission (“FTC”) to issue guidance for covered platforms on specific topics, and provides for the establishment of a Kids Online Safety Council.  The FTC and state attorneys general would have authority to enforce the law, which would take effect 18 months after it is enacted.

In a press conference yesterday, Blumenthal and Blackburn touted 70 bipartisan Senate cosponsors and called for quick Senate passage of the bill without further amendment.

Continue Reading KOSA, COPPA 2.0 Likely to Pass U.S. Senate

On July 10, 2024, the U.S. Senate passed the Stopping Harmful Image Exploitation and Limiting Distribution (“SHIELD”) Act, which would criminalize the distribution of private sexually explicit or nude images online.  

Specifically, the legislation makes it unlawful to knowingly distribute a private intimate visual depiction of an individual

Continue Reading U.S. Senate Passes SHIELD Act to Criminalize Distribution of Private Intimate Images Online

This quarterly update highlights key legislative, regulatory, and litigation developments in the fourth quarter of 2023 and early January 2024 related to technology issues.  These included developments related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), data privacy, and cybersecurity.  As noted below, some of these developments provide companies with the opportunity for participation and comment.

I. Artificial Intelligence

Federal Executive Developments on AI

The Executive Branch and U.S. federal agencies had an active quarter, which included the White House’s October 2023 release of the Executive Order (“EO”) on Safe, Secure, and Trustworthy Artificial Intelligence.  The EO declares a host of new actions for federal agencies designed to set standards for AI safety and security; protect Americans’ privacy; advance equity and civil rights; protect vulnerable groups such as consumers, patients, and students; support workers; promote innovation and competition; advance American leadership abroad; and effectively regulate the use of AI in government.  The EO builds on the White House’s prior work surrounding the development of responsible AI.  Concerning privacy, the EO sets forth a number of requirements for the use of personal data for AI systems, including the prioritization of federal support for privacy-preserving techniques and strengthening privacy-preserving research and technologies (e.g., cryptographic tools).  Regarding equity and civil rights, the EO calls for clear guidance to landlords, Federal benefits programs, and Federal contractors to keep AI systems from being used to exacerbate discrimination.  The EO also sets out requirements for developers of AI systems, including requiring companies developing any foundation model “that poses a serious risk to national security, national economic security, or national public health and safety” to notify the federal government when training the model and provide results of all red-team safety tests to the government.

Federal Legislative Activity on AI

Congress continued to evaluate AI legislation and proposed a number of AI bills, though none of these bills are expected to progress in the immediate future.  For example, members of Congress continued to hold meetings on AI and introduced bills related to deepfakes, AI research, and transparency for foundation models.

  • Deepfakes and Inauthentic Content:  In October 2023, a group of bipartisan senators released a discussion draft of the NO FAKES Act, which would prohibit persons or companies from producing an unauthorized digital replica of an individual in a performance or hosting unauthorized digital replicas if the platform has knowledge that the replica was not authorized by the individual depicted. 
  • Research:  In November 2023, Senator Thune (R-SD), along with five bipartisan co-sponsors, introduced the Artificial Intelligence Research, Innovation, and Accountability Act (S. 3312), which would require covered internet platforms that operate generative AI systems to provide their users with clear and conspicuous notice that the covered internet platform uses generative AI. 
  • Transparency for Foundation Models:  In December 2023, Representative Beyer (D-VA-8) introduced the AI Foundation Model Act (H.R. 6881), which would direct the Federal Trade Commission (“FTC”) to establish transparency standards for foundation model deployers in consultation with other agencies.  The standards would require companies to provide consumers and the FTC with information on a model’s training data and mechanisms, as well as information regarding whether user data is collected in inference.
  • Bipartisan Senate Forums:  Senator Schumer’s (D-NY) AI Insight Forums, which are a part of his SAFE Innovation Framework, continued to take place this quarter.  As part of these forums, bipartisan groups of senators met multiple times to learn more about key issues in AI policy, including privacy and liability, long-term risks of AI, and national security.

Continue Reading U.S. Tech Legislative, Regulatory & Litigation Update – Fourth Quarter 2023