Molly Prindle

Molly Prindle is an associate in the firm’s Washington, DC office, where she is a member of the Litigation and Investigations Practice Group.

Prior to joining the firm, Molly clerked for Judge Ronald L. Gilman of the U.S. Court of Appeals for the Sixth Circuit and Chief Judge Mark R. Hornak of the U.S. District Court for the Western District of Pennsylvania. Molly earned her J.D. from American University Washington College of Law, where she served as Editor-in-Chief of the American University Law Review.

Companies have increasingly leveraged artificial intelligence (“AI”) to facilitate decisions about extending credit, financial lending, and hiring. AI tools have the potential to make these processes more efficient, but they have also recently faced scrutiny over AI-related environmental, social, and governance (“ESG”) risks. Such risks include ethical issues related to the use of facial recognition technology, as well as biases embedded in AI software that may perpetuate racial inequality or have a discriminatory impact on minority communities. ESG and diversity, equity, and inclusion (“DEI”) advocates, along with federal and state regulators, have begun to examine the potential benefits and harms of AI tools for such communities.

As federal and state authorities take stock of the use of AI, the benefits of “responsibly audited AI” have become a focal point and should be on companies’ radars. This post defines “responsibly audited AI” as automated decision-making platforms or algorithms that companies have vetted for ESG-related risks, including but not limited to discriminatory impacts or embedded biases that might adversely affect marginalized and underrepresented communities. By investing in responsibly audited AI, companies will be better positioned to comply with current and future laws and regulations aimed at preventing discriminatory or biased outputs from AI decision-making tools. Companies will also be better poised to achieve their DEI goals.

Federal regulatory and legislative policy and AI decision-making tools

There are several regulatory, policy, and legislative developments focused on the deployment of responsibly audited AI and other automated systems. For example, as part of the Biden-Harris Administration’s recently announced Blueprint for an AI Bill of Rights, the Administration has highlighted key principles companies should consider in the design, development, and deployment of AI and automated systems in order to address AI-related biases that can impinge on the rights of the general public.