
Nicholas Xenakis

Nick Xenakis draws on his Capitol Hill and legal experience to provide public policy and crisis management counsel to clients in a range of industries.

Nick assists clients in developing and implementing policy solutions to litigation and regulatory matters, including on issues involving antitrust, artificial intelligence, bankruptcy, criminal justice, financial services, immigration, intellectual property, life sciences, national security, and technology. He also represents companies and individuals in investigations before U.S. Senate and House Committees.

Nick previously served as General Counsel for the U.S. Senate Judiciary Committee, where he managed committee staff and directed legislative efforts. He also participated in key judicial and Cabinet confirmations, including those of Attorneys General and Supreme Court Justices. Before his time on Capitol Hill, Nick served as an attorney with the Federal Public Defender’s Office for the Eastern District of Virginia.

In late August, the California legislature passed two bills that would limit the creation or use of “digital replicas,” making California the latest state to seek new protections for performers, artists, and other employees in response to the rise of AI-generated content.  These state efforts come as Congress considers the

Continue Reading California Passes Digital Replica Legislation as Congress Considers Federal Approach

This quarterly update highlights key legislative, regulatory, and litigation developments in the second quarter of 2024 related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), and data privacy and cybersecurity. 

I. Artificial Intelligence

Federal Legislative Developments

  • Impact Assessments: The American Privacy Rights Act of 2024 (H.R. 8818, hereinafter “APRA”) was formally introduced in the House by Representative Cathy McMorris Rodgers (R-WA) on June 25, 2024.  Notably, while previous drafts of the APRA, including the May 21 revised draft, would have required algorithm impact assessments, the introduced version no longer includes the “Civil Rights and Algorithms” section that contained these requirements.
  • Disclosures: In April, Representative Adam Schiff (D-CA) introduced the Generative AI Copyright Disclosure Act of 2024 (H.R. 7913).  The Act would require persons that create a training dataset that is used to build a generative AI system to provide notice to the Register of Copyrights containing a “sufficiently detailed summary” of any copyrighted works used in the training dataset and the URL for such training dataset, if the dataset is publicly available.  The Act would require the Register to issue regulations to implement the notice requirements and to maintain a publicly available online database that contains each notice filed.
  • Public Awareness and Toolkits: Certain legislative proposals focused on increasing public awareness of AI and its benefits and risks.  For example, Senator Todd Young (R-IN) introduced the Artificial Intelligence Public Awareness and Education Campaign Act (S. 4596), which would require the Secretary of Commerce, in coordination with other agencies, to carry out a public awareness campaign that provides information regarding the benefits and risks of AI in the daily lives of individuals.  Senator Edward Markey (D-MA) introduced the Social Media and AI Resiliency Toolkits in Schools Act (S. 4614), which would require the Department of Education and the Department of Health and Human Services to develop toolkits to inform students, educators, parents, and others on how AI and social media may impact student mental health.
  • Senate AI Working Group Releases AI Roadmap: On May 15, the Bipartisan Senate AI Working Group published a roadmap for AI policy in the United States (the “AI Roadmap”).  The AI Roadmap encourages committees to conduct further research on specific issues relating to AI, such as “AI and the Workforce” and “High Impact Uses for AI.”  It states that existing laws (concerning, e.g., consumer protection, civil rights) “need to consistently and effectively apply to AI systems and their developers, deployers, and users” and raises concerns about AI “black boxes.”  The AI Roadmap also addresses the need for best practices and the importance of having a human in the loop for certain high impact automated tasks.

Continue Reading U.S. Tech Legislative, Regulatory & Litigation Update – Second Quarter 2024

U.S. Senate Majority Leader Chuck Schumer (D-NY) yesterday, July 23, initiated procedural steps that will likely lead to swift Senate passage of the Kids Online Safety Act (“KOSA”) and the Children and Teens’ Online Privacy Protection Act (“COPPA 2.0”).  Both bills have been under consideration in the Senate and the House of Representatives for some time, as we have previously covered.  Schumer’s action will likely bring the two bills in a single package to the Senate Floor as soon as Thursday, July 25.  The future of the legislation in the House, however, is less certain.

KOSA, led by Sens. Richard Blumenthal (D-Conn.) and Marsha Blackburn (R-Tenn.), would, in its current form (S.1409), require specified “covered platforms” to implement new safeguards, tools, and transparency for minors under 17 online.  These covered platforms:

  • Would have a duty of care to prevent and mitigate enumerated harms.
  • Must have default safeguards for known minors, including tools that: limit the ability of others to communicate with minors; limit features that increase, sustain, or extend use of the platform by the minor; and control personalization systems.
  • Must provide “readily-accessible and easy-to-use settings for parents” to help manage a minor’s use of a platform.
  • Must provide specified notices and obtain verifiable parental consent for children under 13 to register for the service.

KOSA also requires government agencies to conduct research on minors’ use of online services, directs the Federal Trade Commission (“FTC”) to issue guidance for covered platforms on specific topics, and provides for the establishment of a Kids Online Safety Council.  The FTC and state attorneys general would have authority to enforce the law, which would take effect 18 months after it is enacted.

In a press conference yesterday, Blumenthal and Blackburn touted 70 bipartisan Senate cosponsors and called for quick Senate passage of the bill without further amendment.

Continue Reading KOSA, COPPA 2.0 Likely to Pass U.S. Senate

On July 10, 2024, the U.S. Senate passed the Stopping Harmful Image Exploitation and Limiting Distribution (“SHIELD”) Act, which would criminalize the distribution of private sexually explicit or nude images online.  

Specifically, the legislation makes it unlawful to knowingly distribute a private intimate visual depiction of an individual

Continue Reading U.S. Senate Passes SHIELD Act to Criminalize Distribution of Private Intimate Images Online

This quarterly update highlights key legislative, regulatory, and litigation developments in the fourth quarter of 2023 and early January 2024 related to technology issues.  These included developments related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), data privacy, and cybersecurity.  As noted below, some of these developments provide companies with the opportunity for participation and comment.

I. Artificial Intelligence

Federal Executive Developments on AI

The Executive Branch and U.S. federal agencies had an active quarter, which included the White House’s October 2023 release of the Executive Order (“EO”) on Safe, Secure, and Trustworthy Artificial Intelligence.  The EO declares a host of new actions for federal agencies designed to set standards for AI safety and security; protect Americans’ privacy; advance equity and civil rights; protect vulnerable groups such as consumers, patients, and students; support workers; promote innovation and competition; advance American leadership abroad; and effectively regulate the use of AI in government.  The EO builds on the White House’s prior work surrounding the development of responsible AI.  Concerning privacy, the EO sets forth a number of requirements for the use of personal data for AI systems, including the prioritization of federal support for privacy-preserving techniques and strengthening privacy-preserving research and technologies (e.g., cryptographic tools).  Regarding equity and civil rights, the EO calls for clear guidance to landlords, Federal benefits programs, and Federal contractors to keep AI systems from being used to exacerbate discrimination.  The EO also sets out requirements for developers of AI systems, including requiring companies developing any foundation model “that poses a serious risk to national security, national economic security, or national public health and safety” to notify the federal government when training the model and provide results of all red-team safety tests to the government.

Federal Legislative Activity on AI

Congress continued to evaluate AI legislation and proposed a number of AI bills, though none of these bills are expected to progress in the immediate future.  For example, members of Congress continued to hold meetings on AI and introduced bills related to deepfakes, AI research, and transparency for foundation models.

  • Deepfakes and Inauthentic Content:  In October 2023, a group of bipartisan senators released a discussion draft of the NO FAKES Act, which would prohibit persons or companies from producing an unauthorized digital replica of an individual in a performance or hosting unauthorized digital replicas if the platform has knowledge that the replica was not authorized by the individual depicted. 
  • Research:  In November 2023, Senator Thune (R-SD), along with five bipartisan co-sponsors, introduced the Artificial Intelligence Research, Innovation, and Accountability Act (S. 3312), which would require covered internet platforms that operate generative AI systems to provide their users with clear and conspicuous notice that the covered internet platform uses generative AI. 
  • Transparency for Foundation Models:  In December 2023, Representative Beyer (D-VA-8) introduced the AI Foundation Model Transparency Act (H.R. 6881), which would direct the Federal Trade Commission (“FTC”) to establish transparency standards for foundation model deployers in consultation with other agencies.  The standards would require companies to provide consumers and the FTC with information on a model’s training data and mechanisms, as well as information regarding whether user data is collected in inference.
  • Bipartisan Senate Forums:  Senator Schumer’s (D-NY) AI Insight Forums, which are a part of his SAFE Innovation Framework, continued to take place this quarter.  As part of these forums, bipartisan groups of senators met multiple times to learn more about key issues in AI policy, including privacy and liability, long-term risks of AI, and national security.

Continue Reading U.S. Tech Legislative, Regulatory & Litigation Update – Fourth Quarter 2023

The field of artificial intelligence (“AI”) is at a tipping point. Governments and industries are under increasing pressure to forecast and guide the evolution of a technology that promises to transform our economies and societies. In this series, our lawyers and advisors provide an overview of the policy approaches and regulatory frameworks for AI in jurisdictions around the world. Given the rapid pace of technological and policy developments in this area, the articles in this series should be viewed as snapshots in time, reflecting the current policy environment and priorities in each jurisdiction.

The following article examines the state of play in AI policy and regulation in the United States. The previous article in this series covered the European Union.

Future of AI Policy in the U.S.

U.S. policymakers are focused on artificial intelligence (AI) platforms as they explode into the mainstream.  AI has emerged as an active policy space across Congress and the Biden Administration, as officials scramble to educate themselves on the technology while crafting legislation, rules, and other measures to balance U.S. innovation leadership with national security priorities.

Over the past year, AI issues have drawn bipartisan interest and support.  House and Senate committees have held nearly three dozen hearings on AI this year alone, and more than 30 AI-focused bills have been introduced so far this Congress.  Two bipartisan groups of Senators have announced separate frameworks for comprehensive AI legislation.  Several AI bills—largely focused on the federal government’s internal use of AI—have also been voted on and passed through committees. 

Meanwhile, the Biden Administration has announced plans to issue a comprehensive executive order this fall to address a range of AI risks under existing law.  The Administration has also taken steps to promote the responsible development and deployment of AI systems, including securing voluntary commitments regarding AI safety and transparency from 15 technology companies. 

Despite strong bipartisan interest in AI regulation, commitment from leaders of major technology companies engaged in AI R&D, and broad support from the general public, passing comprehensive AI legislation remains a challenge.  No consensus has emerged around either substance or process, with different groups of Members, particularly in the Senate, developing their own versions of AI legislation through different procedures.  In the House, a bipartisan bill would punt the issue of comprehensive regulation to the executive branch, creating a blue-ribbon commission to study the issue and make recommendations.

I. Major Policy & Regulatory Initiatives

Three versions of a comprehensive AI regulatory regime have emerged in Congress – two in the Senate and one in the House.  We preview these proposals below.

A. SAFE Innovation: Values-Based Framework and New Legislative Process

In June, Senate Majority Leader Chuck Schumer (D-NY) unveiled a new bipartisan proposal—with Senators Martin Heinrich (D-NM), Todd Young (R-IN), and Mike Rounds (R-SD)—to develop legislation to promote and regulate artificial intelligence.  Leader Schumer proposed a plan to boost U.S. global competitiveness in AI development, while ensuring appropriate protections for consumers and workers.

Continue Reading Spotlight Series on Global AI Policy — Part II: U.S. Legislative and Regulatory Developments

This quarterly update summarizes key legislative and regulatory developments in the second quarter of 2023 related to key technologies and related topics, including Artificial Intelligence (“AI”), the Internet of Things (“IoT”), connected and automated vehicles (“CAVs”), data privacy and cybersecurity, and online teen safety.

Artificial Intelligence

AI continued to be an area of significant interest for both lawmakers and regulators throughout the second quarter of 2023.  Members of Congress continued to grapple with ways to address risks posed by AI, holding hearings, making public statements, and introducing legislation to regulate AI.  Notably, Senator Chuck Schumer (D-NY) revealed his “SAFE Innovation framework” for AI legislation.  The framework reflects five principles for AI – security, accountability, foundations, explainability, and innovation – and is summarized here.  There were also a number of AI legislative proposals introduced this quarter.  Some proposals, like the National AI Commission Act (H.R. 4223) and Digital Platform Commission Act (S. 1671), propose the creation of an agency or commission to review and regulate AI tools and systems.  Other proposals focus on mandating disclosures of AI systems.  For example, the AI Disclosure Act of 2023 (H.R. 3831) would require generative AI systems to include a specific disclaimer on any outputs generated, and the REAL Political Advertisements Act (S. 1596) would require political advertisements to include a statement within the contents of the advertisement if generative AI was used to generate any image or video footage.  Additionally, Congress convened hearings to explore AI regulation this quarter, including a Senate Judiciary Committee hearing in May titled “Oversight of A.I.: Rules for Artificial Intelligence.”

There also were several federal Executive Branch and regulatory developments focused on AI in the second quarter of 2023, including, for example:

  • White House:  The White House issued a number of updates on AI this quarter, including the Office of Science and Technology Policy’s strategic plan focused on federal AI research and development, discussed in greater detail here.  The White House also requested comments on the use of automated tools in the workplace, including a request for feedback on tools to surveil, monitor, evaluate, and manage workers, described here.
  • CFPB:  The Consumer Financial Protection Bureau (“CFPB”) issued a spotlight on the adoption and use of chatbots by financial institutions.
  • FTC:  The Federal Trade Commission (“FTC”) continued to issue guidance on AI, such as guidance expressing the FTC’s view that dark patterns extend to AI, that generative AI poses competition concerns, and that tools claiming to spot AI-generated content must make accurate disclosures of their abilities and limitations.
  • HHS Office of National Coordinator for Health IT:  This quarter, the Department of Health and Human Services (“HHS”) released a proposed rule related to certified health IT that enables or interfaces with “predictive decision support interventions” (“DSIs”) that incorporate AI and machine learning technologies.  The proposed rule would require the disclosure of certain information about predictive DSIs to enable users to evaluate DSI quality and whether and how to rely on the DSI recommendations, including a description of the development and validation of the DSI.  Developers of certified health IT would also be required to implement risk management practices for predictive DSIs and make summary information about these practices publicly available.

Continue Reading U.S. Tech Legislative & Regulatory Update – Second Quarter 2023

On June 29, 2023, the Federal Trade Commission (“FTC”) published a blog post expressing concerns about the recent rise of generative artificial intelligence (“generative AI”). To get ahead of this rapidly developing technology, the FTC identified “the essential building blocks” of generative AI and highlighted some business practices the agency would consider “unfair methods of competition.” The FTC also underscored technological aspects unique to generative AI that could raise competition concerns.

What is Generative AI?

Traditional AI has existed in the marketplace for years, largely assisting users in analyzing or manipulating existing data.  Generative AI, on the other hand, represents a significant advance with its ability to generate entirely new text, images, audio, and video.  The FTC notes that this content is frequently “indistinguishable from content crafted directly by humans.”

What are the “essential building blocks” of generative AI?

The FTC identified three “essential building blocks” that companies need to develop generative AI. Without fair access to the necessary inputs, the FTC warns that competition and the ability for new players to enter the market will suffer.

  • Data. Generative AI models require access to vast amounts of data, particularly in the early phases where models build up a robust competency in a specific domain (for example, text or images). Market incumbents may possess an inherent advantage because of access to data collected over many years. The FTC notes that while “simply having large amounts of data is not unlawful,” creating undue barriers to access that data may be considered unfair competition.

Continue Reading The Federal Trade Commission and Generative AI Competition Concerns

The American Music Fairness Act (“AMFA”) has been re-introduced in the Senate for this Congress.  Sen. Padilla (D-CA) introduced the bill (S.253) earlier this month, along with Sens. Blackburn (R-TN), Tillis (R-NC), and Feinstein (D-CA).  The bill was referred to the Judiciary Committee, on which every cosponsor serves. 

Continue Reading The American Music Fairness Act Returns to the Senate

On Tuesday, February 14, 2023, the Senate Judiciary Committee held a hearing titled “Protecting Our Children Online.”  The witnesses included only consumer advocates; no industry representatives testified.  Committee Chair Senator Durbin (D-IL) indicated, however, that he plans to hold another hearing featuring representatives from technology companies.

The key takeaway was that there continues to be strong bipartisan support for passing legislation that addresses privacy and online safety for minors.  Senator Durbin and Senator Graham (R-SC), the Committee’s Ranking Member, agreed that the Committee will mark up relevant legislation, which could happen within the next six months—making the next couple of months particularly important for negotiations.  Notably, all of the previously introduced legislation that was discussed had passed at least its respective Senate Committee last Congress.

Senators focused on four bills that could be included as part of a legislative package:

  1. Kids Online Safety Act (KOSA) (to be reintroduced).  KOSA would apply to “covered platforms,” which the previous bill defined as a “commercial software application or electronic service that connects to the internet and that is used, or is reasonably likely to be used, by a minor.”  Among other things, KOSA would impose a duty of care on covered platforms that would require them to “prevent and mitigate the heightened risks of physical, emotional, developmental, or material harms to minors posed by materials” on the platform.

Continue Reading Senate Judiciary Committee Holds Hearing on Children’s Online Safety