Artificial Intelligence (AI)

On May 23, 2023, the White House announced that it took the following steps to further advance responsible Artificial Intelligence (“AI”) practices in the U.S.:

  • the Office of Science and Technology Policy (“OSTP”) released an updated strategic plan that focuses on federal investments in AI research and development (“R&D”);
  • OSTP issued a new request for information (“RFI”) on critical AI issues; and
  • the Department of Education issued a new report on risks and opportunities related to AI in education.

These announcements build on other recent actions by the Administration in connection with AI, such as the announcement earlier this month regarding new National Science Foundation funding for AI research institutions and meetings with AI providers.

This post briefly summarizes the actions taken in the White House’s most recent announcement.

Updated OSTP Strategic Plan

The updated OSTP strategic plan defines major research challenges in AI to coordinate and focus federal R&D investments.  The plan aims to ensure continued U.S. leadership in the development and use of trustworthy AI systems, prepare the current and future U.S. workforce for the integration of AI systems across all sectors, and coordinate ongoing AI activities across agencies.

The updated plan identifies nine strategies:

Continue Reading White House Announces New Efforts to Advance Responsible AI Practices

On 11 May 2023, members of the European Parliament’s internal market (IMCO) and civil liberties (LIBE) committees agreed their final text on the EU’s proposed AI Act. After MEPs formalize their position through a plenary vote (expected this summer), the AI Act will enter the last stage of the legislative process: “trilogue” negotiations with the

On April 25, 2023, four federal agencies — the Department of Justice (“DOJ”), Federal Trade Commission (“FTC”), Consumer Financial Protection Bureau (“CFPB”), and Equal Employment Opportunity Commission (“EEOC”) — released a joint statement on the agencies’ efforts to address discrimination and bias in automated systems. 

The statement applies to “automated systems,” which are broadly defined “to mean software and algorithmic processes” beyond AI.  Although the statement notes the significant benefits that can flow from the use of automated systems, it also cautions against unlawful discrimination that may result from that use. 

The statement starts by summarizing the existing legal authorities that apply to automated systems and each agency’s guidance and statements related to AI.  Helpfully, the statement serves to aggregate links to key AI-related guidance documents from each agency, providing a one-stop-shop for important AI-related publications for all four entities.  For example, the statement summarizes the EEOC’s remit in enforcing federal laws that make it unlawful to discriminate against an applicant or employee and the EEOC’s enforcement activities related to AI, and includes a link to a technical assistance document.  Similarly, the report outlines the FTC’s reports and guidance on AI, and includes multiple links to FTC AI-related documents.

After providing an overview of each agency’s position and links to key documents, the statement then summarizes the following sources of potential discrimination and bias, which could indicate the regulatory and enforcement priorities of these agencies.

  • Data and Datasets:  The statement notes that outcomes generated by automated systems can be skewed by unrepresentative or imbalanced data sets.  The statement says that flawed data sets, along with correlation between data and protected classes, can lead to discriminatory outcomes.
  • Model Opacity and Access:  The statement observes that some automated systems are “black boxes,” meaning that the internal workings of automated systems are not always transparent to people, and thus difficult to oversee.
  • Design and Use:  The statement also notes that flawed assumptions about users may play a role in unfair or biased outcomes.
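One common screen for the kind of discriminatory outcomes the statement describes is the "four-fifths rule" of thumb from the Uniform Guidelines on Employee Selection Procedures, which the EEOC applies when assessing adverse impact in selection tools. The joint statement itself prescribes no formula; the sketch below, with entirely hypothetical figures, simply illustrates how the rule compares selection rates across groups:

```python
# Illustrative sketch only: the four-fifths rule compares each group's
# selection rate to the highest group's rate. A ratio below 0.8 may
# indicate adverse impact. All numbers here are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants that a screening tool selects."""
    return selected / applicants

def four_fifths_check(rate_group: float, rate_reference: float):
    """Return (impact_ratio, within_threshold) under the four-fifths
    rule of thumb: a group's rate below 80% of the reference (highest)
    group's rate may signal adverse impact."""
    ratio = rate_group / rate_reference
    return ratio, ratio >= 0.8

# Hypothetical outcomes from an automated resume screener
rate_a = selection_rate(60, 100)   # group A: 60% selected
rate_b = selection_rate(30, 100)   # group B: 30% selected

ratio, within = four_fifths_check(rate_b, rate_a)
print(f"impact ratio: {ratio:.2f}, within four-fifths threshold: {within}")
# ratio of 0.50 falls well below 0.8, flagging potential adverse impact
```

A failing ratio is not itself a legal conclusion; it is one screening heuristic among the broader data, opacity, and design concerns the agencies identify.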

We will continue to monitor these and related developments across our blogs.

Continue Reading DOJ, FTC, CFPB, and EEOC Statement on Discrimination and AI

On April 11, 2023, the Cyberspace Administration of China (“CAC”) released draft Administrative Measures for Generative Artificial Intelligence Services (《生成式人工智能服务管理办法(征求意见稿)》) (“draft Measures”) (official Chinese version available here) for public consultation.  The deadline for submitting comments is May 10, 2023.

The draft Measures would regulate generative Artificial Intelligence (“AI”) services that are “provided to the public in mainland China.”  These requirements cover a wide range of issues that are frequently debated in relation to the governance of generative AI globally, such as data protection, non-discrimination, bias, and the quality of training data.  The draft Measures also highlight issues arising from the use of generative AI that are of particular concern to the Chinese government, such as content moderation, the completion of a security assessment for new technologies, and algorithmic transparency.  The draft Measures thus reflect the Chinese government’s objective to craft its own governance model for new technologies such as generative AI.

Further, and notwithstanding the requirements introduced by the draft Measures (as described in greater detail below), the text states that the government encourages the (indigenous) development of (and international cooperation in relation to) generative AI technology, and encourages companies to adopt “secure and trustworthy software, tools, computing and data resources” to that end. 

Notably, the draft Measures do not make a distinction between generative AI services offered to individual consumers or enterprise customers, although certain requirements appear to be more directed to consumer-facing services than enterprise services.

Continue Reading China Proposes Draft Measures to Regulate Generative AI

In August 2022, the Chips and Science Act—a massive, $280 billion bill to boost public and private sector investments in critical and emerging technologies—became law.  We followed the bill from the beginning and anticipated significant opportunities for industry to inform and influence the direction of the new law’s programs. 

One such opportunity is available now.  The U.S. Department of Commerce recently published a request for information (RFI) “to inform the planning and design of the Regional Technology and Innovation Hub (Tech Hubs) program.”  The public comment period ends March 16, 2023.

Background

The Chips and Science Act authorized $10 billion for the U.S. Department of Commerce to establish a Regional Technology and Innovation Hub (Tech Hubs) program.  Specifically, Commerce was charged with designating at least 20 Tech Hubs and awarding grants to consortia composed of one or more institutions of higher education, political subdivisions, state governments, and “industry or firms in relevant technology, innovation, or manufacturing sectors” to develop and deploy critical technologies in those hubs.  $500 million has already been made available for the program, and Commerce will administer the program through the Economic Development Administration (EDA).

Continue Reading Commerce Seeks Comments on Regional Tech Hubs Program

On 24 January 2023, the Italian Supervisory Authority (“Garante”) announced it fined three hospitals 55,000 EUR each for their unlawful use of an artificial intelligence (“AI”) system for risk stratification purposes, i.e., to systematically categorize patients based on their health status. The Garante also ordered the hospitals to erase all the

This quarterly update summarizes key legislative and regulatory developments in the fourth quarter of 2022 related to Artificial Intelligence (“AI”), the Internet of Things (“IoT”), connected and autonomous vehicles (“CAVs”), and data privacy and cybersecurity.

Artificial Intelligence

In the last quarter of 2022, the annual National Defense Authorization Act (“NDAA”), which contained AI-related provisions, was enacted into law.  The NDAA creates a pilot program to demonstrate use cases for AI in government. Specifically, the Director of the Office of Management and Budget (“Director of OMB”) must identify four new use cases for the application of AI-enabled systems to support modernization initiatives that require “linking multiple siloed internal and external data sources.” The pilot program is also meant to enable agencies to demonstrate the circumstances under which AI can be used to modernize agency operations and “leverage commercially available artificial intelligence technologies that (i) operate in secure cloud environments that can deploy rapidly without the need to replace operating systems; and (ii) do not require extensive staff or training to build.” Finally, the pilot program prioritizes use cases where AI can drive “agency productivity in predictive supply chain and logistics,” such as predictive food demand and optimized supply, predictive medical supplies and equipment demand, and predictive logistics for disaster recovery, preparedness, and response.

At the state level, in late 2022, there were also efforts to advance requirements for AI used to make certain types of decisions under comprehensive privacy frameworks.  The Colorado Privacy Act draft rules were updated to clarify the circumstances that require controllers to provide an opt-out right for the use of automated decision-making and requirements for assessments of profiling decisions.  In California, although the California Consumer Privacy Act draft regulations do not yet cover automated decision-making, the California Privacy Protection Agency rules subcommittee provided a sample list of questions concerning automated decision-making during its December 16, 2022 board meeting.

Continue Reading U.S. AI, IoT, CAV, and Privacy Legislative Update – Fourth Quarter 2022

Companies have increasingly leveraged artificial intelligence (“AI”) to facilitate decisions in the extension of credit and financial lending as well as hiring decisions.  AI tools have the potential to produce efficiencies in processes but have also recently faced scrutiny for AI-related environmental, social, and governance (“ESG”) risks.  Such risks include AI ethical issues related to the use of facial recognition technology or embedded biases in AI software that may potentially perpetuate racial inequality or have a discriminatory impact on minority communities.  ESG and diversity, equity, and inclusion (“DEI”) advocates, along with federal and state regulators, have begun to examine the potential benefit and harm of AI tools vis-à-vis such communities.  

As federal and state authorities take stock of the use of AI, the benefits of “responsibly audited AI” have become a focal point and should be on companies’ radars.  This post defines “responsibly audited AI” as automated decision-making platforms or algorithms that companies have vetted for ESG-related risks, including but not limited to discriminatory impacts or embedded biases that might adversely impact marginalized and underrepresented communities.  By investing in responsibly audited AI, companies will be better positioned to comply with current and future laws or regulations geared toward avoiding discriminatory or biased outputs caused by AI decision-making tools.  Companies will also be better poised to achieve their DEI goals. 

Federal regulatory and legislative policy and AI decision-making tools

There are several regulatory, policy, and legislative developments focused on the deployment of responsibly audited AI and other automated systems.  For example, as part of the Biden-Harris Administration’s recently announced Blueprint for an AI Bill of Rights, the Administration has highlighted key principles companies should consider in the design, development, and deployment of AI and automated systems in order to address AI-related biases that can impinge on the rights of the general public.

Continue Reading Responsibly Audited AI and the ESG/AI Nexus

Many employers and employment agencies have turned to artificial intelligence (“AI”) tools to assist them in making better and faster employment decisions, including in the hiring and promotion processes.  The use of AI for these purposes has been scrutinized and will now be regulated in New York City.  The New York City Department of Consumer

Policymakers and candidates of both parties have increased their focus on how technology is changing society, including by blaming platforms and other participants in the tech ecosystem for a range of social ills even while recognizing them as significant contributors to U.S. economic success globally.  Republicans and Democrats have significant interparty—and intraparty—differences in the form of their grievances and on many of the remedial measures to combat the purported harms.  Nonetheless, the growing inclination to do more on tech has apparently driven one key congressional committee to have compromised on previously intractable issues involving data privacy.  Rules around the use of algorithms and artificial intelligence, which have attracted numerous legislative proposals in recent years, may be the next area of convergence. 

While influential members of both parties have pointed to the promise and peril of the increasing role of algorithms and artificial intelligence in American life, they have tended to raise different concerns.  Legislative proposals from Democrats have frequently focused on how deployment of algorithms and artificial intelligence affects protected classes, while Republican proposals have largely, but not exclusively, been aimed at perceived unfairness in how algorithms treat Republicans and those expressing conservative views.  For instance, Republican Whip John Thune (R-SD), the former chair of the Senate Committee on Commerce, Science, and Transportation, has sponsored the Political BIAS Emails Act (S. 4409), which would address technology companies reportedly filtering Republican campaign emails.  Meanwhile, Senator Ron Wyden (D-OR) introduced the Algorithmic Accountability Act (S. 3572) that, among other things, requires that “automated decision systems” be subject to an “evaluation of any differential performance associated with consumers’ race, color, sex, gender, age, disability, religion, family status, socioeconomic status, or veteran status.”

Continue Reading Artificial Intelligence and Algorithms in the Next Congress