Artificial Intelligence (AI)

On April 11, 2023, the Cyberspace Administration of China (“CAC”) released draft Administrative Measures for Generative Artificial Intelligence Services (《生成式人工智能服务管理办法(征求意见稿)》) (“draft Measures”) (official Chinese version available here) for public consultation.  The deadline for submitting comments is May 10, 2023.

The draft Measures would regulate generative Artificial Intelligence (“AI”) services that are “provided to the public in mainland China.”  These requirements cover a wide range of issues that are frequently debated in relation to the governance of generative AI globally, such as data protection, non-discrimination, bias, and the quality of training data.  The draft Measures also highlight issues arising from the use of generative AI that are of particular concern to the Chinese government, such as content moderation, the completion of a security assessment for new technologies, and algorithmic transparency.  The draft Measures thus reflect the Chinese government’s objective to craft its own governance model for new technologies such as generative AI.

Further, and notwithstanding the requirements introduced by the draft Measures (as described in greater detail below), the text states that the government encourages the indigenous development of, and international cooperation in relation to, generative AI technology, and encourages companies to adopt “secure and trustworthy software, tools, computing and data resources” to that end.

Notably, the draft Measures do not distinguish between generative AI services offered to individual consumers and those offered to enterprise customers, although certain requirements appear to be directed more at consumer-facing services than at enterprise services.

Continue Reading China Proposes Draft Measures to Regulate Generative AI

In August 2022, the Chips and Science Act—a massive, $280 billion bill to boost public and private sector investments in critical and emerging technologies—became law.  We followed the bill from the beginning and anticipated significant opportunities for industry to inform and influence the direction of the new law’s programs. 

One such opportunity is available now.  The U.S. Department of Commerce recently published a request for information (RFI) “to inform the planning and design of the Regional Technology and Innovation Hub (Tech Hubs) program.”  The public comment period ends March 16, 2023.

Background

The Chips and Science Act authorized $10 billion for the U.S. Department of Commerce to establish a Regional Technology and Innovation Hub (Tech Hubs) program.  Specifically, Commerce was charged with designating at least 20 Tech Hubs and awarding grants to consortia composed of one or more institutions of higher education, political subdivisions, state governments, and “industry or firms in relevant technology, innovation, or manufacturing sectors” to develop and deploy critical technologies in those hubs.  $500 million has already been made available for the program, and Commerce will administer the program through the Economic Development Administration (EDA).

Continue Reading Commerce Seeks Comments on Regional Tech Hubs Program

On 24 January 2023, the Italian Supervisory Authority (“Garante”) announced that it had fined three hospitals 55,000 EUR each for their unlawful use of an artificial intelligence (“AI”) system for risk stratification purposes, i.e., to systematically categorize patients based on their health status. The Garante also ordered the hospitals to erase all the data

This quarterly update summarizes key legislative and regulatory developments in the fourth quarter of 2022 related to Artificial Intelligence (“AI”), the Internet of Things (“IoT”), connected and autonomous vehicles (“CAVs”), and data privacy and cybersecurity.

Artificial Intelligence

In the last quarter of 2022, the annual National Defense Authorization Act (“NDAA”), which contained AI-related provisions, was enacted into law.  The NDAA creates a pilot program to demonstrate use cases for AI in government.  Specifically, the Director of the Office of Management and Budget (“Director of OMB”) must identify four new use cases for the application of AI-enabled systems to support modernization initiatives that require “linking multiple siloed internal and external data sources.”  The pilot program is also meant to enable agencies to demonstrate the circumstances under which AI can be used to modernize agency operations and “leverage commercially available artificial intelligence technologies that (i) operate in secure cloud environments that can deploy rapidly without the need to replace operating systems; and (ii) do not require extensive staff or training to build.”  Finally, the pilot program prioritizes use cases where AI can drive “agency productivity in predictive supply chain and logistics,” such as predictive food demand and optimized supply, predictive medical supplies and equipment demand, and predictive logistics for disaster recovery, preparedness, and response.

At the state level, in late 2022, there were also efforts to advance requirements for AI used to make certain types of decisions under comprehensive privacy frameworks.  The Colorado Privacy Act draft rules were updated to clarify the circumstances that require controllers to provide an opt-out right for the use of automated decision-making, as well as the requirements for assessments of profiling decisions.  In California, although the California Consumer Privacy Act draft regulations do not yet cover automated decision-making, the California Privacy Protection Agency rules subcommittee presented a sample list of questions concerning automated decision-making during its December 16, 2022 board meeting.

Continue Reading U.S. AI, IoT, CAV, and Privacy Legislative Update – Fourth Quarter 2022

Companies have increasingly leveraged artificial intelligence (“AI”) to facilitate decisions in the extension of credit and financial lending as well as hiring decisions.  AI tools have the potential to produce efficiencies in processes but have also recently faced scrutiny for AI-related environmental, social, and governance (“ESG”) risks.  Such risks include AI ethical issues related to the use of facial recognition technology or embedded biases in AI software that may potentially perpetuate racial inequality or have a discriminatory impact on minority communities.  ESG and diversity, equity, and inclusion (“DEI”) advocates, along with federal and state regulators, have begun to examine the potential benefit and harm of AI tools vis-à-vis such communities.  

As federal and state authorities take stock of the use of AI, the benefits of “responsibly audited AI” have become a focal point and should be on companies’ radars.  This post defines “responsibly audited AI” as automated decision-making platforms or algorithms that companies have vetted for ESG-related risks, including but not limited to discriminatory impacts or embedded biases that might adversely impact marginalized and underrepresented communities.  By investing in responsibly audited AI, companies will be better positioned to comply with current and future laws or regulations geared toward avoiding discriminatory or biased outputs caused by AI decision-making tools.  Companies will also be better poised to achieve their DEI goals.
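What such vetting involves in practice is left open here.  Purely as an illustration, the Python sketch below computes one widely used screening metric, the disparate impact ratio associated with the EEOC’s “four-fifths” guideline.  The groups, decision data, and use case are hypothetical assumptions for this sketch; no law or regulation discussed in this post mandates this particular test.

```python
# Illustrative sketch only: one common fairness screen (the "four-fifths rule")
# that an internal AI audit might include. The group labels and decision data
# below are hypothetical; 1 = favorable outcome (e.g., approved), 0 = not.

def selection_rate(decisions: list[int]) -> float:
    """Share of favorable outcomes in a group."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a: list[int], group_b: list[int]) -> float:
    """Ratio of the lower selection rate to the higher one.

    A ratio below 0.8 is often treated as a red flag under the
    EEOC's four-fifths guideline for adverse impact.
    """
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical model outputs for two demographic groups.
approvals_a = [1, 1, 1, 0, 1, 1, 0, 1]   # selection rate = 0.75
approvals_b = [1, 0, 0, 1, 0, 1, 0, 0]   # selection rate = 0.375

ratio = disparate_impact_ratio(approvals_a, approvals_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.50 -> below 0.8, review
```

A screen like this is only a starting point; a responsible audit would also examine training data, feature choices, and error rates across groups, and would document any mitigation steps taken.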

Federal regulatory and legislative policy and AI decision-making tools

There are several regulatory, policy, and legislative developments focused on the deployment of responsibly audited AI and other automated systems.  For example, as part of the Biden-Harris Administration’s recently announced Blueprint for an AI Bill of Rights, the Administration has highlighted key principles companies should consider in the design, development, and deployment of AI and automated systems in order to address AI-related biases that can impinge on the rights of the general public.

Continue Reading Responsibly Audited AI and the ESG/AI Nexus

Many employers and employment agencies have turned to artificial intelligence (“AI”) tools to assist them in making better and faster employment decisions, including in the hiring and promotion processes.  The use of AI for these purposes has been scrutinized and will now be regulated in New York City.  The New York City Department of Consumer

Policymakers and candidates of both parties have increased their focus on how technology is changing society, including by blaming platforms and other participants in the tech ecosystem for a range of social ills even while recognizing them as significant contributors to U.S. economic success globally.  Republicans and Democrats have significant interparty—and intraparty—differences in the form of their grievances and on many of the remedial measures to combat the purported harms.  Nonetheless, the growing inclination to do more on tech has apparently driven one key congressional committee to compromise on previously intractable issues involving data privacy.  Rules around the use of algorithms and artificial intelligence, which have attracted numerous legislative proposals in recent years, may be the next area of convergence.

While influential members of both parties have pointed to the promise and peril of the increasing role of algorithms and artificial intelligence in American life, they have tended to raise different concerns.  Legislative proposals from Democrats have frequently focused on how the deployment of algorithms and artificial intelligence affects protected classes, while Republican proposals have largely, but not exclusively, been aimed at perceived unfairness in how algorithms treat Republicans and those expressing conservative views.  For instance, Republican Whip John Thune (R-SD), the former chair of the Senate Committee on Commerce, Science, and Transportation, has sponsored the Political BIAS Emails Act (S. 4409), which would address technology companies reportedly filtering Republican campaign emails.  Meanwhile, Senator Ron Wyden (D-OR) introduced the Algorithmic Accountability Act (S. 3572), which, among other things, would require that “automated decision systems” be subject to an “evaluation of any differential performance associated with consumers’ race, color, sex, gender, age, disability, religion, family status, socioeconomic status, or veteran status.”
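The quoted bill text does not prescribe how such an evaluation would be performed.  As a minimal sketch, assuming per-group accuracy as the performance metric and wholly invented data, surfacing “differential performance” might look like the following:

```python
# Hypothetical sketch of a "differential performance" evaluation of the kind
# S. 3572's text contemplates. The metric (per-group accuracy) and the data
# are assumptions for illustration; the bill does not prescribe a method.
from collections import defaultdict

def accuracy_by_group(records: list[tuple[str, int, int]]) -> dict[str, float]:
    """records holds (group, true_label, predicted_label) triples."""
    correct: dict[str, int] = defaultdict(int)
    total: dict[str, int] = defaultdict(int)
    for group, truth, prediction in records:
        total[group] += 1
        correct[group] += int(truth == prediction)
    return {group: correct[group] / total[group] for group in total}

# Invented evaluation set for an automated decision system.
results = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 0, 0),
]

for group, acc in sorted(accuracy_by_group(results).items()):
    print(f"{group}: accuracy = {acc:.2f}")
# group_a: accuracy = 0.75
# group_b: accuracy = 0.50  -> a gap that such an evaluation would flag
```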

Continue Reading Artificial Intelligence and Algorithms in the Next Congress

On July 13, the Federal Trade Commission published a notice of proposed rulemaking regarding the Motor Vehicle Dealers Trade Regulation Rule, which is aimed at combating certain unfair and deceptive trade practices by dealers and promoting pricing transparency.  Comments to the proposed rule are due on or before September 12, 2022.


The proposed rule:

  1. Prohibits dealers from making certain misrepresentations in the sales process, enumerated in proposed § 463.3.  The list of prohibited misrepresentations includes misrepresentations regarding the “costs or terms of purchasing, financing, or leasing a vehicle” or “any costs, limitation, benefit, or any other Material aspect of an Add-on Product or Service.”
  2. Includes new disclosure requirements regarding pricing, financing and add-on products and services.  Notably, the proposed rule would obligate dealers to disclose the offering price in many advertisements and communications with consumers.
  3. Prohibits charges for add-on products and services that confer no benefit to the consumer and prohibits charges for items without “Express, Informed Consent” from the consumer (which, notably, as defined, excludes any “signed or initialed document, by itself”).  The proposed rule outlines a specific process for presenting charges for add-on products and services to the consumer, which obligates the dealer to disclose and offer to close the transaction for the “Cash Price without Optional Add-Ons” and obtain confirmation in writing that the consumer has rejected that price.
  4. Imposes additional record-keeping requirements on the dealer, in order to demonstrate compliance with the rule.  The record-keeping requirements apply for a period of 24 months from the date the applicable record is created.


Continue Reading FTC Proposes Motor Vehicle Dealers Trade Regulation Rule

This quarterly update summarizes key federal legislative and regulatory developments in the second quarter of 2022 related to artificial intelligence (“AI”), the Internet of Things, connected and automated vehicles (“CAVs”), and data privacy, and highlights a few particularly notable developments in U.S. state legislatures.  To summarize, in the second quarter of 2022, Congress and the Administration focused on addressing algorithmic bias and other AI-related risks and introduced a bipartisan federal privacy bill.

Artificial Intelligence

Federal lawmakers introduced legislation in the second quarter of 2022 aimed at addressing risks in the development and use of AI systems, in particular risks related to algorithmic bias and discrimination.  Senator Michael Bennet (D-CO) introduced the Digital Platform Commission Act of 2022 (S. 4201), which would empower a new federal agency, the Federal Digital Platform Commission, to develop regulations for online platforms that facilitate interactions between consumers, as well as between consumers and entities offering goods and services.  Regulations contemplated by the bill include requirements that algorithms used by online platforms “are fair, transparent, and without harmful, abusive, anticompetitive, or deceptive bias.”  Although this bill does not appear to have the support to be passed in this Congress, it is emblematic of the concerns in Congress that might later lead to legislation.

Additionally, the bipartisan American Data Privacy and Protection Act (H.R. 8152), introduced by a group of lawmakers led by Representative Frank Pallone (D-NJ-6), would require “large data holders” (defined as covered entities and service providers with over $250 million in gross annual revenue that collect, process, or transfer the covered data of over five million individuals or the sensitive covered data of over 200,000 individuals) to conduct “algorithm impact assessments” on algorithms that “may cause potential harm to an individual.”  These assessments would be required to provide, among other information, details about the design of the algorithm and the steps the entity is taking to mitigate harms to individuals.  Separately, developers of algorithms would be required to conduct “algorithm design evaluations” that evaluate the design, structure, and inputs of the algorithm.  The American Data Privacy and Protection Act is discussed in further detail in the Data Privacy section below.
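H.R. 8152 describes what these assessments must address but not how they must be recorded.  Purely as a hypothetical sketch, the snippet below shows one way an entity might structure an assessment record internally; every field name and value here is an assumption loosely mirroring the bill’s language, not a format the bill prescribes.

```python
# Hypothetical internal record for an "algorithm impact assessment" under
# a framework like H.R. 8152. All field names and values are invented for
# illustration; the bill does not prescribe any particular schema.
from dataclasses import dataclass, field

@dataclass
class AlgorithmImpactAssessment:
    system_name: str
    purpose: str
    design_summary: str                      # design, structure, and inputs
    potential_harms: list[str] = field(default_factory=list)
    mitigation_steps: list[str] = field(default_factory=list)

assessment = AlgorithmImpactAssessment(
    system_name="credit_prescreen_v2",
    purpose="Rank applicants for pre-approved credit offers",
    design_summary="Gradient-boosted trees over bureau and transaction data",
    potential_harms=["Disparate approval rates across protected classes"],
    mitigation_steps=["Quarterly disparate impact review", "Feature audit"],
)
print(f"{assessment.system_name}: {len(assessment.mitigation_steps)} mitigation steps")
```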

Continue Reading U.S. AI, IoT, CAV, and Data Privacy Legislative and Regulatory Update – Second Quarter 2022

Facial recognition technology (“FRT”) has attracted a fair amount of attention over the years, including in the EU (e.g., see our posts on the European Parliament vote and CNIL guidance), the UK (e.g., ICO opinion and High Court decision) and the U.S. (e.g., Washington state and NTIA guidelines). This post summarizes two recent developments in this space: (i) the UK Information Commissioner’s Office (“ICO”)’s announcement of a £7.5-million fine and enforcement notice against Clearview AI (“Clearview”), and (ii) the EDPB’s release of draft guidelines on the use of FRT in law enforcement.

I. ICO Fines Clearview AI £7.5m

In the past year, Clearview has been subject to investigations into its data processing activities by the French and Italian authorities, and a joint investigation by the ICO and the Australian Information Commissioner. All four regulators held that Clearview’s processing of biometric data scraped from over 20 billion facial images from across the internet, including from social media sites, breached data protection laws.

On 26 May 2022, the ICO released its monetary penalty notice and enforcement notice against Clearview. The ICO concluded that Clearview’s activities infringed a number of provisions of the GDPR and UK GDPR, including:

  • Failing to process data in a way that is fair and transparent under Article 5(1)(a) GDPR. The ICO concluded that people were not made aware or would not reasonably expect their images to be scraped, added to a worldwide database, and made available to a wide range of customers for the purpose of matching images on the company’s database.
  • Failing to process data in a way that is lawful under the GDPR. The ICO ruled that Clearview’s processing did not meet any of the conditions for lawful processing set out in Article 6, nor, for biometric data, in Article 9(2) GDPR.
  • Failing to have a data retention policy and thus being unable to ensure that personal data are not retained for longer than necessary under Article 5(1)(e) GDPR. There was no indication as to when (or whether) any images are ever removed from Clearview’s database.
  • Failing to provide data subjects with the necessary information under Article 14 GDPR. According to the ICO’s investigation, the only way in which data subjects could obtain that information was by contacting Clearview and directly requesting it.
  • Impeding the exercise of data subject rights under Articles 15, 16, 17, 21 and 22 GDPR. In order to exercise these rights, data subjects needed to provide Clearview with additional personal data, in the form of a photograph of themselves that could be matched against Clearview’s database.
  • Failing to conduct a Data Protection Impact Assessment (“DPIA”) under Article 35 GDPR. The ICO found that Clearview failed at any time to conduct a DPIA in respect of its processing of the personal data of UK residents.


Continue Reading Facial Recognition Update: UK ICO Fines Clearview AI £7.5m & EDPB Adopts Draft Guidelines on Use of FRT by Law Enforcement