Only one claim survived dismissal in a recent putative class action lawsuit alleging that a pathology laboratory failed to safeguard patient data in a cyberattack. See Order Granting Motion to Dismiss in Part, Thai v. Molecular Pathology Laboratory Network, Inc., No. 3:22-CV-315-KAC-DCP (E.D. Tenn. Sep. 29, 2023), ECF 38.
Continue Reading All but One Claim in Pathology Lab Data Breach Class Action Tossed on Motion to Dismiss
Senate Whitepaper Addresses AI in the Workplace
On September 6, 2023, U.S. Senator Bill Cassidy, ranking member of the Senate Health, Education, Labor and Pensions (HELP) Committee, published a white paper addressing artificial intelligence (AI) and its potential benefits and risks in the workplace, as well as in the health care context, which we discuss here.…
Continue Reading Senate Whitepaper Addresses AI in the Workplace
FEC Seeks Comment on AI Petition After Earlier Deadlock, But New Rules Remain Elusive
The Federal Election Commission (FEC) officially dipped its toes into the ongoing national debate around artificial intelligence (AI) regulation, publishing a Federal Register notice seeking comment on a petition submitted by Public Citizen to initiate a rulemaking to clarify that the Federal Election Campaign Act (FECA) prohibits deceptive AI-generated campaign…
Continue Reading FEC Seeks Comment on AI Petition After Earlier Deadlock, But New Rules Remain Elusive
U.S. Tech Legislative & Regulatory Update – Second Quarter 2023
This quarterly update summarizes key legislative and regulatory developments in the second quarter of 2023 related to key technologies and related topics, including Artificial Intelligence (“AI”), the Internet of Things (“IoT”), connected and automated vehicles (“CAVs”), data privacy and cybersecurity, and online teen safety.
Artificial Intelligence
AI continued to be an area of significant interest to both lawmakers and regulators throughout the second quarter of 2023. Members of Congress continue to grapple with ways to address risks posed by AI and have held hearings, made public statements, and introduced legislation to regulate AI. Notably, Senator Chuck Schumer (D-NY) revealed his “SAFE Innovation framework” for AI legislation. The framework reflects five principles for AI – security, accountability, foundations, explainability, and innovation – and is summarized here. There were also a number of AI legislative proposals introduced this quarter. Some proposals, like the National AI Commission Act (H.R. 4223) and the Digital Platform Commission Act (S. 1671), propose the creation of an agency or commission to review and regulate AI tools and systems. Other proposals focus on mandating disclosures regarding AI systems. For example, the AI Disclosure Act of 2023 (H.R. 3831) would require generative AI systems to include a specific disclaimer on any outputs generated, and the REAL Political Advertisements Act (S. 1596) would require political advertisements to include a statement within the contents of the advertisement if generative AI was used to generate any image or video footage. Additionally, Congress convened hearings to explore AI regulation this quarter, including a Senate Judiciary Committee hearing in May titled “Oversight of A.I.: Rules for Artificial Intelligence.”
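To make the disclosure idea concrete, here is a minimal, purely hypothetical sketch of how a generative AI service might append the kind of disclaimer H.R. 3831 contemplates to its outputs. The function name and disclaimer wording are illustrative assumptions, not the bill's statutory text.

```python
# Hypothetical illustration: appending a plain-text AI-generation disclaimer
# to generative AI output. The wording below is illustrative only, not the
# statutory text of H.R. 3831.

AI_DISCLAIMER = "Disclaimer: this output has been generated by artificial intelligence."


def label_generated_text(model_output: str) -> str:
    """Return the model output with an AI-generation disclaimer appended."""
    return f"{model_output}\n\n{AI_DISCLAIMER}"


if __name__ == "__main__":
    sample = "Here is a short summary of the requested topic."
    print(label_generated_text(sample))
```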
There also were several federal Executive Branch and regulatory developments focused on AI in the second quarter of 2023, including, for example:
- White House: The White House issued a number of updates on AI this quarter, including the Office of Science and Technology Policy’s strategic plan focused on federal AI research and development, discussed in greater detail here. The White House also requested comments on the use of automated tools in the workplace, including a request for feedback on tools to surveil, monitor, evaluate, and manage workers, described here.
- CFPB: The Consumer Financial Protection Bureau (“CFPB”) issued a spotlight on the adoption and use of chatbots by financial institutions.
- FTC: The Federal Trade Commission (“FTC”) continued to issue guidance on AI, such as guidance expressing the FTC’s view that dark patterns extend to AI, that generative AI poses competition concerns, and that tools claiming to spot AI-generated content must make accurate disclosures of their abilities and limitations.
- HHS Office of the National Coordinator for Health IT: This quarter, the Department of Health and Human Services (“HHS”) released a proposed rule related to certified health IT that enables or interfaces with “predictive decision support interventions” (“DSIs”) that incorporate AI and machine learning technologies. The proposed rule would require the disclosure of certain information about predictive DSIs to enable users to evaluate DSI quality and to decide whether and how to rely on DSI recommendations, including a description of the development and validation of the DSI. Developers of certified health IT would also be required to implement risk management practices for predictive DSIs and make summary information about these practices publicly available.
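As a rough illustration of the kind of transparency the proposed rule describes, the sketch below shows one way summary information about a predictive DSI could be published in machine-readable form. The class, field names, and example values are assumptions for illustration and do not track the specific attributes enumerated in the proposed rule.

```python
# Hypothetical sketch of a machine-readable disclosure record for a predictive
# decision support intervention (DSI). Field names and values are illustrative
# assumptions, not the attributes enumerated in the HHS proposed rule.
from dataclasses import dataclass, asdict
import json


@dataclass
class DsiDisclosure:
    name: str                       # name of the predictive DSI
    intended_use: str               # purpose the DSI is intended to support
    development_data: str           # description of the data used to develop the model
    validation_summary: str         # how the DSI was validated and on what population
    risk_management_practices: str  # summary of risk management applied to the DSI


disclosure = DsiDisclosure(
    name="Sepsis risk predictor (example)",
    intended_use="Flag inpatients at elevated risk of sepsis for clinician review.",
    development_data="De-identified EHR records from 2015-2020 (illustrative).",
    validation_summary="Retrospective validation on a held-out cohort (illustrative).",
    risk_management_practices="Periodic bias and performance monitoring (illustrative).",
)

# Publish the disclosure in a machine-readable form so users can evaluate the DSI.
print(json.dumps(asdict(disclosure), indent=2))
```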
Continue Reading U.S. Tech Legislative & Regulatory Update – Second Quarter 2023
The Federal Trade Commission and Generative AI Competition Concerns
On June 29, 2023, the Federal Trade Commission (“FTC”) published a blog post on its website expressing concerns about the recent rise of generative artificial intelligence (“generative AI”). To get ahead of this rapidly developing technology, the FTC identified “the essential building blocks” of generative AI and highlighted some business practices the agency would consider “unfair methods of competition.” The FTC also underscored technological aspects unique to generative AI that could raise competition concerns.
What is Generative AI?
Traditional AI has existed in the marketplace for years and largely assisted users in analyzing or manipulating existing data. Generative AI, on the other hand, represents a significant advance with its ability to generate entirely new text, images, audio, and video. The FTC notes that this content is frequently “indistinguishable from content crafted directly by humans.”
What are the “essential building blocks” of generative AI?
The FTC identified three “essential building blocks” that companies need to develop generative AI. The FTC warns that, without fair access to these necessary inputs, competition and the ability of new players to enter the market will suffer.
- Data. Generative AI models require access to vast amounts of data, particularly in the early phases where models build up a robust competency in a specific domain (for example, text or images). Market incumbents may possess an inherent advantage because of access to data collected over many years. The FTC notes that while “simply having large amounts of data is not unlawful,” creating undue barriers to access that data may be considered unfair competition.
Continue Reading The Federal Trade Commission and Generative AI Competition Concerns
Senator Schumer Unveils New Two-Part Proposal to Regulate AI
Today, Senate Majority Leader Chuck Schumer (D-NY) unveiled a new bipartisan proposal to develop legislation to promote and regulate artificial intelligence. In a speech at the Center for Strategic & International Studies, Leader Schumer remarked: “[W]ith AI, we cannot be ostriches sticking our heads in the sand. The question…
Continue Reading Senator Schumer Unveils New Two-Part Proposal to Regulate AI
EU and US Lawmakers Agree to Draft AI Code of Conduct
On 31 May 2023, at the close of the fourth meeting of the US-EU Trade & Tech Council (“TTC”), Margrethe Vestager – the European Union’s Executive Vice President, responsible for competition and digital strategy – announced that the EU and US are working together to develop a voluntary AI Code…
Continue Reading EU and US Lawmakers Agree to Draft AI Code of Conduct
White House Announces New Efforts to Advance Responsible AI Practices
On May 23, 2023, the White House announced that it took the following steps to further advance responsible Artificial Intelligence (“AI”) practices in the U.S.:
- the Office of Science and Technology Policy (“OSTP”) released an updated strategic plan that focuses on federal investments in AI research and development (“R&D”);
- OSTP issued a new request for information (“RFI”) on critical AI issues; and
- the Department of Education issued a new report on risks and opportunities related to AI in education.
These announcements build on other recent actions by the Administration in connection with AI, such as the announcement earlier this month regarding new National Science Foundation funding for AI research institutions and meetings with AI providers.
This post briefly summarizes the actions taken in the White House’s most recent announcement.
Updated OSTP Strategic Plan
The updated OSTP strategic plan defines major research challenges in AI to coordinate and focus federal R&D investments. The plan aims to ensure continued U.S. leadership in the development and use of trustworthy AI systems, prepare the current and future U.S. workforce for the integration of AI systems across all sectors, and coordinate ongoing AI activities across agencies.
The plan as updated identifies nine strategies:
Continue Reading White House Announces New Efforts to Advance Responsible AI Practices
EU Parliament’s AI Act Proposals Introduce New Obligations for Foundation Models and Generative AI
On 11 May 2023, members of the European Parliament’s internal market (IMCO) and civil liberties (LIBE) committees agreed their final text on the EU’s proposed AI Act. After MEPs formalize their position through a plenary vote (expected this summer), the AI Act will enter the last stage of the legislative…
Continue Reading EU Parliament’s AI Act Proposals Introduce New Obligations for Foundation Models and Generative AI
DOJ, FTC, CFPB, and EEOC Statement on Discrimination and AI
On April 25, 2023, four federal agencies — the Department of Justice (“DOJ”), Federal Trade Commission (“FTC”), Consumer Financial Protection Bureau (“CFPB”), and Equal Employment Opportunity Commission (“EEOC”) — released a joint statement on the agencies’ efforts to address discrimination and bias in automated systems.
The statement applies to “automated systems,” which are broadly defined “to mean software and algorithmic processes” beyond AI. Although the statement notes the significant benefits that can flow from the use of automated systems, it also cautions against unlawful discrimination that may result from that use.
The statement starts by summarizing the existing legal authorities that apply to automated systems and each agency’s guidance and statements related to AI. Helpfully, the statement aggregates links to key AI-related guidance documents from each agency, providing a one-stop shop for important AI-related publications from all four entities. For example, the statement summarizes the EEOC’s remit in enforcing federal laws that make it unlawful to discriminate against an applicant or employee, describes the EEOC’s enforcement activities related to AI, and includes a link to a technical assistance document. Similarly, the statement outlines the FTC’s reports and guidance on AI and includes multiple links to FTC AI-related documents.
After providing an overview of each agency’s position and links to key documents, the statement then summarizes the following sources of potential discrimination and bias, which could indicate the regulatory and enforcement priorities of these agencies.
- Data and Datasets: The statement notes that outcomes generated by automated systems can be skewed by unrepresentative or imbalanced datasets. The statement adds that flawed datasets, along with correlations between data and protected classes, can lead to discriminatory outcomes; a simple illustration of screening for this kind of skew appears after this list.
- Model Opacity and Access: The statement observes that some automated systems are “black boxes,” meaning that the internal workings of automated systems are not always transparent to people, and thus difficult to oversee.
- Design and Use: The statement also notes that flawed assumptions about users may play a role in unfair or biased outcomes.
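To illustrate the data-related concern above, the sketch below runs one simple, commonly used screen for skewed outcomes: comparing selection rates across groups and flagging any group whose rate falls well below the highest group's. The records, group labels, and 0.8 threshold are illustrative assumptions, not criteria drawn from the agencies' statement.

```python
# Minimal sketch of one common screen for skewed outcomes in a dataset:
# compare selection rates across groups defined by a protected attribute.
# Group labels, records, and the 0.8 threshold are illustrative assumptions.
from collections import defaultdict

records = [
    {"group": "A", "selected": True},
    {"group": "A", "selected": True},
    {"group": "A", "selected": False},
    {"group": "B", "selected": True},
    {"group": "B", "selected": False},
    {"group": "B", "selected": False},
]

totals, selected = defaultdict(int), defaultdict(int)
for record in records:
    totals[record["group"]] += 1
    selected[record["group"]] += int(record["selected"])

# Selection rate per group, compared against the highest-rate group.
rates = {group: selected[group] / totals[group] for group in totals}
benchmark = max(rates.values())

for group, rate in rates.items():
    ratio = rate / benchmark
    flag = "review" if ratio < 0.8 else "ok"
    print(f"group={group} selection_rate={rate:.2f} ratio_to_top={ratio:.2f} [{flag}]")
```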
We will continue to monitor these and related developments across our blogs.
Continue Reading DOJ, FTC, CFPB, and EEOC Statement on Discrimination and AI