On September 6, 2023, Senator Bill Cassidy (R-LA), Ranking Member of the U.S. Senate Health, Education, Labor, and Pensions (HELP) Committee, issued a white paper on the oversight and legislative role of Congress related to the deployment of artificial intelligence (AI) in areas under the HELP Committee’s jurisdiction, including health and life sciences. In the white paper, Senator Cassidy disfavors a one-size-fits-all approach to regulating AI and instead calls for a flexible approach that leverages existing frameworks depending on the particular context in which AI is used. “[O]nly if our current frameworks are unable to accommodate . . . AI, should Congress look to create new ones or modernize existing ones.” The Senator seeks public feedback on the white paper by September 22, 2023. Health care and life sciences stakeholders should consider providing comments.
This post outlines five key takeaways from the white paper from a health care and life sciences perspective. Beyond health and life sciences, the white paper also addresses considerations for other areas under the Committee’s jurisdiction, such as the use of AI in educational settings and the labor and employment implications of AI.
5 Key Takeaways for AI in Health Care and Life Sciences
The white paper – entitled “Exploring Congress’ Framework for the Future of AI: The Oversight and Legislative Role of Congress Over the Integration of Artificial Intelligence in Health, Education, and Labor” – describes the “enormous good” that AI in health care presents, such as “the potential to help create new cures, improve care, and reduce administrative burdens and overall health care spending.” At the same time, Senator Cassidy notes that AI presents risks that legal frameworks should seek to minimize. Five key takeaways from the white paper include:
- Senator Cassidy emphasizes that a one-size-fits-all approach will not work for AI, and he grounds many of the broader, ongoing AI policy considerations in the HELP Committee’s core expertise and existing regulatory frameworks for health-related AI. Many of the emerging frameworks for defining trustworthy or responsible AI and establishing AI risk management practices are drafted broadly for all uses of AI and may not reflect that AI applications in different sectors present unique challenges. Leveraging the HELP Committee’s expertise in health regulatory frameworks (as well as in other sectors within the Committee’s jurisdiction), the white paper concludes that the context of use greatly affects how policymakers should think about AI’s benefits and risks. In other words, the white paper recognizes that AI deployed in health care settings requires a different regulatory approach than AI deployed in educational or employment settings. Senator Cassidy cautions that a “sweeping, one-size-fits-all approach for regulating AI will not work and will stifle, not foster, innovation.”
- Changes to FDA’s device framework may be on the horizon. The white paper states that Congress should look to modernize existing frameworks (or create new ones) only if current frameworks cannot accommodate AI. For example, the white paper acknowledges that the existing framework for preclinical and clinical investigation of new drugs is “generally well-suited to adapt to the use of AI to research and develop new drugs.” In contrast, Senator Cassidy specifically notes that FDA’s medical device framework was not designed to accommodate AI that may improve over time, signaling potential future action by the HELP Committee to amend relevant provisions of the Federal Food, Drug, and Cosmetic Act (FDCA) to clarify how FDA will treat medical devices that integrate AI.
- There are a variety of applications of AI that will benefit the health care and life sciences sector and, ultimately, public health, including:
- Pharmaceutical research and development, such as with disease target and drug candidate identification and/or design;
- Diagnostic and treatment applications, from early disease detection to AI applications intended to help identify and reduce medical errors;
- Patient- and provider-facing support, including internally developed clinical decision support (CDS) algorithms and AI interfaces that engage directly with patients;
- Health care administration and coverage, including claims management, surgical scheduling, generation of replies to patient messages, summarization of patient medical histories, and translation between languages and reading levels for patient materials; and
- Use of AI to increase the speed and efficiency of FDA’s review processes.
- The acknowledgement of these important use cases in the health and life sciences sector leaves open several FDA regulatory questions. For example:
- As noted above, the white paper is fairly explicit that changes to FDA’s regulatory framework may be required to address AI, but Senator Cassidy leaves open for comment what specific changes might be needed.
- For AI that does not meet the definition of a medical device (or that is subject to enforcement discretion by FDA), Senator Cassidy leaves open for comment how health-related AI should be regulated (e.g., who is responsible for training clinicians before they use certain AI tools described in the white paper, and what standards such training must meet).
- FDA expertise will be critical as AI plays a larger role in health and life sciences, and Senator Cassidy leaves open for comment how Congress should help FDA address these challenges.
- Where FDA incorporates AI into its own work, including premarket review processes, the white paper leaves open how sponsors and the public will know what review elements are being performed by AI and whether a unique process will be needed to appeal AI-based decisions within the Agency.
- Bias and transparency continue to be front-burner issues. The discussion of bias and transparency in the white paper confirms that Congress remains focused on how to manage these issues in AI regulation. The white paper states that AI tools should be developed in a transparent way that provides an understanding of how any given algorithm was designed, but it leaves open for comment what specific guidelines and steps would satisfy this need. The white paper also notes that any framework must build in a “clear method to measure effectiveness” and that Congress may need to consider how best to ensure that AI-enabled products do not give undue weight to potential biases.
Bonus Takeaway: Health care applications of AI may create ambiguities about liability. The white paper states that stakeholders need a clear understanding of potential liability around the use of AI. Specifically, the white paper highlights open questions about how liability should be assigned among the original developer, the most recent developer, clinicians, and others.
Request for Stakeholder Feedback
Recognizing that the “insights of stakeholders that can describe the advantages and drawbacks of AI in our health care system . . . are critical as policy makers grapple with this topic,” Senator Cassidy requests “feedback and comments for ways to improve the framework in which these technologies are developed, reviewed, and used” by Friday, September 22. Although feedback is not confined to these topics, the white paper poses the following questions for consideration specific to health care:
Supporting Medical Innovation:
- How can FDA support the use of AI to design and develop new drugs and biologics?
- What updates to the regulatory frameworks for drugs and biologics should Congress consider to facilitate innovation in AI applications?
- How can FDA improve the use of AI in medical devices?
- What updates to the regulatory frameworks for medical devices should Congress consider to facilitate innovation in AI applications while also ensuring that products are safe and effective for patients?
- How can Congress help FDA ensure that it has access to the expertise required to review products that are developed using AI or that incorporate AI?
- How can FDA better leverage AI to review product submissions?
- How can FDA harness external expertise to support review of products that are developed using AI or that incorporate AI?
- What are the potential consequences of regulating AI in the United States if it remains unregulated in other countries?
Medical Ethics and Protecting Patients:
- What existing standards are in place to demonstrate clinical validity when leveraging AI? What gaps exist in those standards?
- What practices are in place to mitigate bias in AI decision-making?
- What should be the federal role, if any, in addressing social and/or political bias?
- How can AI best be adopted so that it does not inappropriately deny patients care?
- Is the current HIPAA framework equipped to safeguard patient privacy with regard to AI in clinical settings? If not, where does it fall short, and how could the framework be better equipped?
- What standards are in place to ensure that AI maintains respect and dignity for human life from conception to natural death?
- Who should be responsible for determining safe and appropriate applications of AI algorithms?
- Who should be liable for unsafe or inappropriate applications of AI algorithms? The developer? A regulating body? A third party or private entity?