Attorneys General in Oregon and Connecticut issued guidance over the holidays interpreting their authority under their states' comprehensive privacy statutes and related authorities.  Specifically, the Oregon Attorney General's guidance focuses on laws relevant to artificial intelligence ("AI"), and the Connecticut Attorney General's guidance focuses on the opt-out preference signal requirements that take effect in the state on January 1, 2025.

Oregon Guidance on AI Systems

On December 24, Oregon Attorney General Ellen Rosenblum issued guidance, "What you should know about how Oregon's laws may affect your company's use of Artificial Intelligence," which underscores that the state's Unlawful Trade Practices Act ("Oregon UTPA"), Consumer Privacy Act ("OCPA"), Equality Act, and other legal authorities apply to AI.  After noting the opportunities AI presents for Oregon's economy, from streamlining tasks to delivering personalized services, the guidance states that AI can also raise concerns around privacy, discrimination, and accountability.

In particular, the guidance discusses how the Oregon UTPA and OCPA apply to the development and use of AI.  First, with respect to the Oregon UTPA, Attorney General Rosenblum states that the "marketing, sale, or use" of AI systems is not exempt from the statute.  The guidance then provides several examples of AI-related activities that could implicate the Oregon UTPA: for example, a business could violate the Oregon UTPA if it fails to disclose a material defect or material nonconformity in an AI product, or if it misrepresents the characteristics, uses, benefits, or qualities of an AI system.

The guidance also addresses the intersection between the development and use of AI systems and the OCPA, with Attorney General Rosenblum highlighting three notable topics:

  • Disclosures of Personal Data for Model Training:  Attorney General Rosenblum states that developers that use personal data to train AI systems “must clearly disclose this in an accessible and clear privacy notice.”  Additionally, the guidance states that suppliers and developers cannot retroactively or passively alter privacy notices and must obtain affirmative consent for any new or secondary uses of that data.
  • Sensitive Data for Training:  The guidance also states that the use of sensitive data to develop or train AI models requires consent.
  • DPIAs:  The guidance further states that "feeding consumer data into AI models and processing it in connection with these models" presents "heightened risks to consumers" that require a data protection assessment.

Connecticut Guidance on OOPS Signals

On December 30, Connecticut Attorney General William Tong issued a press release announcing that the requirement to honor global opt-out preference signals ("OOPS") sent by consumers takes effect on January 1, 2025.  Under the Connecticut privacy statute, covered entities must honor signals that communicate a consumer's request to opt out of the sale of their personal data or targeted advertising.

Libbie Canter

Libbie Canter represents a wide variety of multinational companies on privacy, cybersecurity, and technology transaction issues, including helping clients with their most complex privacy challenges and the development of governance frameworks and processes to comply with global privacy laws. She routinely supports clients on their efforts to launch new products and services involving emerging technologies, and she has assisted dozens of clients with their efforts to prepare for and comply with federal and state privacy laws, including the California Consumer Privacy Act and California Privacy Rights Act.

Libbie represents clients across industries, but she also has deep expertise in advising clients in highly regulated sectors, including financial services and digital health companies. She counsels these companies — and their technology and advertising partners — on how to address legacy regulatory issues and the cutting-edge issues that have emerged with industry innovations and data collaborations.

As part of her practice, she also regularly represents clients in strategic transactions involving personal data and cybersecurity risk. She advises companies from all sectors on compliance with laws governing the handling of health-related data. Libbie is recognized as an Up and Coming lawyer in Chambers USA, Privacy & Data Security: Healthcare. Chambers USA notes that Libbie is "incredibly sharp and really thorough. She can do the nitty-gritty, in-the-weeds legal work incredibly well but she also can think of a bigger-picture business context and help to think through practical solutions."

Jayne Ponder

Jayne Ponder counsels national and multinational companies across industries on data privacy, cybersecurity, and emerging technologies, including Artificial Intelligence and Internet of Things.

In particular, Jayne advises clients on compliance with federal, state, and global privacy frameworks, and counsels clients on navigating the rapidly evolving legal landscape. Her practice includes partnering with clients on the design of new products and services, drafting and negotiating privacy terms with vendors and third parties, developing privacy notices and consent forms, and helping clients design governance programs for the development and deployment of Artificial Intelligence and Internet of Things technologies.

Jayne routinely represents clients in privacy and consumer protection enforcement actions brought by the Federal Trade Commission and state attorneys general, including matters related to data privacy and advertising. She also helps clients articulate their perspectives through the rulemaking processes led by state regulators and privacy agencies.

As part of her practice, Jayne advises companies on cybersecurity incident preparedness and response, including by drafting, revising, and testing incident response plans, conducting cybersecurity gap assessments, engaging vendors, and analyzing obligations under breach notification laws following an incident.