Jemie Fofanah

Jemie Fofanah is an associate in the firm’s Washington, DC office. She is a member of the Privacy and Cybersecurity Practice Group and the Technology and Communication Regulatory Practice Group. She also maintains an active pro bono practice with a focus on criminal defense and family law.

New Jersey and New Hampshire are the latest states to pass comprehensive privacy legislation, joining California, Virginia, Colorado, Connecticut, Utah, Iowa, Indiana, Tennessee, Montana, Oregon, Texas, Florida, and Delaware.  Below is a summary of key takeaways. 

New Jersey

On January 8, 2024, the New Jersey state senate passed S.B. 332 (“the Act”), which was signed into law on January 16, 2024.  The Act, which takes effect 365 days after enactment, resembles the comprehensive privacy statutes in Connecticut, Colorado, Montana, and Oregon, though there are some notable distinctions. 

  • Scope and Applicability:  The Act will apply to controllers that conduct business or produce products or services in New Jersey, and, during a calendar year, control or process either (1) the personal data of at least 100,000 consumers, excluding personal data processed for the sole purpose of completing a transaction; or (2) the personal data of at least 25,000 consumers where the business derives revenue, or receives a discount on the price of any goods or services, from the sale of personal data. The Act omits several exemptions present in other state comprehensive privacy laws, including exemptions for nonprofit organizations and information covered by the Family Educational Rights and Privacy Act.
  • Consumer Rights:  Consumers will have the rights of access, deletion, portability, and correction under the Act.  Moreover, the Act will provide consumers with the right to opt out of targeted advertising, the sale of personal data, and profiling in furtherance of decisions that produce legal or similarly significant effects concerning the consumer.  The Act will require controllers, within six months of the Act’s effective date, to develop a universal opt-out mechanism by which consumers can exercise these opt-out rights.
  • Sensitive Data:  The Act will require consent prior to the collection of sensitive data. “Sensitive data” is defined to include, among other things, racial or ethnic origin, religious beliefs, mental or physical health condition, sex life or sexual orientation, citizenship or immigration status, status as transgender or non-binary, and genetic or biometric data.  Notably, the Act is the first comprehensive privacy statute other than the California Consumer Privacy Act to include financial information in its definition of sensitive data.  The Act defines financial information as an “account number, account log-in, financial account, or credit or debit card number, in combination with any required security code, access code, or password that would permit access to a consumer’s financial account.”
  • Opt-In Consent for Certain Processing of Personal Data Concerning Teens:  Unless a controller obtains a consumer’s consent, the Act will prohibit the controller from processing personal data for targeted advertising, sale, or profiling where the controller has actual knowledge, or willfully disregards, that the consumer is between the ages of 13 and 16 years old.
  • Enforcement and Rulemaking:  The Act grants the New Jersey Attorney General enforcement authority.  The Act also provides controllers with a 30-day right to cure for certain violations, which will sunset eighteen months after the Act’s effective date.  Like the comprehensive privacy laws in California and Colorado, the Act authorizes rulemaking: it requires the Director of the Division of Consumer Affairs in the Department of Law and Public Safety to promulgate rules and regulations, pursuant to the state Administrative Procedure Act, that are necessary to effectuate the Act’s provisions.

Continue Reading New Jersey and New Hampshire Pass Comprehensive Privacy Legislation

On January 24, 2024, the U.S. National Science Foundation (“NSF”) announced the launch of the National Artificial Intelligence Research Resource (“NAIRR”) pilot, a two-year initiative to develop a shared national research infrastructure for responsible AI discovery and innovation. The launch makes progress on a goal in President Biden’s recent Executive Order on AI safety and…

This quarterly update highlights key legislative, regulatory, and litigation developments in the fourth quarter of 2023 and early January 2024 related to technology issues.  These included developments related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), data privacy, and cybersecurity.  As noted below, some of these developments provide companies with the opportunity for participation and comment.

I. Artificial Intelligence

Federal Executive Developments on AI

The Executive Branch and U.S. federal agencies had an active quarter, which included the White House’s October 2023 release of the Executive Order (“EO”) on Safe, Secure, and Trustworthy Artificial Intelligence.  The EO declares a host of new actions for federal agencies designed to set standards for AI safety and security; protect Americans’ privacy; advance equity and civil rights; protect vulnerable groups such as consumers, patients, and students; support workers; promote innovation and competition; advance American leadership abroad; and effectively regulate the use of AI in government.  The EO builds on the White House’s prior work surrounding the development of responsible AI.  Concerning privacy, the EO sets forth a number of requirements for the use of personal data for AI systems, including the prioritization of federal support for privacy-preserving techniques and strengthening privacy-preserving research and technologies (e.g., cryptographic tools).  Regarding equity and civil rights, the EO calls for clear guidance to landlords, Federal benefits programs, and Federal contractors to keep AI systems from being used to exacerbate discrimination.  The EO also sets out requirements for developers of AI systems, including requiring companies developing any foundation model “that poses a serious risk to national security, national economic security, or national public health and safety” to notify the federal government when training the model and provide results of all red-team safety tests to the government.

Federal Legislative Activity on AI

Congress continued to evaluate AI legislation and proposed a number of AI bills, though none of these bills are expected to progress in the immediate future.  For example, members of Congress continued to hold meetings on AI and introduced bills related to deepfakes, AI research, and transparency for foundational models.

  • Deepfakes and Inauthentic Content:  In October 2023, a group of bipartisan senators released a discussion draft of the NO FAKES Act, which would prohibit persons or companies from producing an unauthorized digital replica of an individual in a performance or hosting unauthorized digital replicas if the platform has knowledge that the replica was not authorized by the individual depicted. 
  • Research:  In November 2023, Senator Thune (R-SD), along with five bipartisan co-sponsors, introduced the Artificial Intelligence Research, Innovation, and Accountability Act (S. 3312), which would require covered internet platforms that operate generative AI systems to provide their users with clear and conspicuous notice that the covered internet platform uses generative AI. 
  • Transparency for Foundational Models:  In December 2023, Representative Beyer (D-VA-8) introduced the AI Foundation Model Act (H.R. 6881), which would direct the Federal Trade Commission (“FTC”) to establish transparency standards for foundation model deployers in consultation with other agencies.  The standards would require companies to provide consumers and the FTC with information on a model’s training data and mechanisms, as well as information regarding whether user data is collected in inference.
  • Bipartisan Senate Forums:  Senator Schumer’s (D-NY) AI Insight Forums, which are a part of his SAFE Innovation Framework, continued to take place this quarter.  As part of these forums, bipartisan groups of senators met multiple times to learn more about key issues in AI policy, including privacy and liability, long-term risks of AI, and national security.

Continue Reading U.S. Tech Legislative, Regulatory & Litigation Update – Fourth Quarter 2023

This quarterly update summarizes key legislative and regulatory developments in the second quarter of 2023 related to key technologies and related topics, including Artificial Intelligence (“AI”), the Internet of Things (“IoT”), connected and automated vehicles (“CAVs”), data privacy and cybersecurity, and online teen safety.

Artificial Intelligence

AI continued to be an area of significant interest of both lawmakers and regulators throughout the second quarter of 2023.  Members of Congress continue to grapple with ways to address risks posed by AI and have held hearings, made public statements, and introduced legislation to regulate AI.  Notably, Senator Chuck Schumer (D-NY) revealed his “SAFE Innovation framework” for AI legislation.  The framework reflects five principles for AI – security, accountability, foundations, explainability, and innovation – and is summarized here.  There were also a number of AI legislative proposals introduced this quarter.  Some proposals, like the National AI Commission Act (H.R. 4223) and Digital Platform Commission Act (S. 1671), propose the creation of an agency or commission to review and regulate AI tools and systems.  Other proposals focus on mandating disclosures of AI systems.  For example, the AI Disclosure Act of 2023 (H.R. 3831) would require generative AI systems to include a specific disclaimer on any outputs generated, and the REAL Political Advertisements Act (S. 1596) would require political advertisements to include a statement within the contents of the advertisement if generative AI was used to generate any image or video footage.  Additionally, Congress convened hearings to explore AI regulation this quarter, including a Senate Judiciary Committee Hearing in May titled “Oversight of A.I.: Rules for Artificial Intelligence.”

There also were several federal Executive Branch and regulatory developments focused on AI in the second quarter of 2023, including, for example:

  • White House:  The White House issued a number of updates on AI this quarter, including the Office of Science and Technology Policy’s strategic plan focused on federal AI research and development, discussed in greater detail here.  The White House also requested comments on the use of automated tools in the workplace, including a request for feedback on tools to surveil, monitor, evaluate, and manage workers, described here.
  • CFPB:  The Consumer Financial Protection Bureau (“CFPB”) issued a spotlight on the adoption and use of chatbots by financial institutions.
  • FTC:  The Federal Trade Commission (“FTC”) continued to issue guidance on AI, such as guidance expressing the FTC’s view that dark patterns extend to AI, that generative AI poses competition concerns, and that tools claiming to spot AI-generated content must make accurate disclosures of their abilities and limitations.
  • HHS Office of National Coordinator for Health IT:  This quarter, the Department of Health and Human Services (“HHS”) released a proposed rule related to certified health IT that enables or interfaces with “predictive decision support interventions” (“DSIs”) that incorporate AI and machine learning technologies.  The proposed rule would require the disclosure of certain information about predictive DSIs to enable users to evaluate DSI quality and whether and how to rely on the DSI recommendations, including a description of the development and validation of the DSI.  Developers of certified health IT would also be required to implement risk management practices for predictive DSIs and make summary information about these practices publicly available.

Continue Reading U.S. Tech Legislative & Regulatory Update – Second Quarter 2023

Last week, FCC Chairwoman Jessica Rosenworcel announced the creation of a new Privacy and Data Protection Task Force (the “Task Force”) to demonstrate the agency’s commitment to protecting consumer data and ensuring that the telecommunications industry remains secure from threat actors.

The Task Force will be led by Enforcement Bureau Chief Loyaan Egal and include…