Life Sciences & Digital Health


In a precedent-setting decision of 13 November 2024, the EU General Court annulled significant parts of a Commission Regulation that sought to restrict, or place under scrutiny, the addition to foods of certain botanicals containing hydroxyanthracene derivatives (“HADs”).  The Court held that the Commission had exceeded its powers by seeking to regulate botanical “preparations.”  Moreover, the Commission, in relying on the scientific opinion of the European Food Safety Authority (“EFSA”), had failed to demonstrate that the relevant substances would be ingested in amounts greatly exceeding those consumed in a normal diet or otherwise represented a potential risk to consumers.

1. Background

Regulation (EC) 1925/2006 governs the addition of vitamins and minerals and of certain other substances to food (the “Fortification Regulation”).  Article 8 permits the Commission, on its own initiative or on the basis of information provided by Member States, to prohibit, restrict or place under scrutiny “substances” and “ingredients containing a substance” which are “added to foods or used in the manufacture of foods under conditions that would result in the ingestion of amounts of this substance greatly exceeding those reasonably expected to be ingested under normal conditions of consumption of a balanced and varied diet and/or would otherwise represent a potential risk to consumers.”

In 2016, the Commission, relying on Article 8, requested EFSA to provide a scientific opinion on the safety of HADs and preparations containing HADs.  In November 2017, EFSA adopted its scientific opinion “Safety of hydroxyanthracene derivatives for use in foods” (“the EFSA Opinion”), in which it concluded as follows:

Continue Reading EU Court Overturns EU-wide Botanical Food Ban

October 12, 2024, marks the 10-year anniversary of the entry into force of the Nagoya Protocol on Access to Genetic Resources and the Fair and Equitable Sharing of Benefits Arising from their Utilization (“ABS”). This supplementary treaty to the Convention on Biological Diversity (“CBD”) has now been ratified by 142 countries. Over the past decade, the Nagoya Protocol has prompted the mushrooming of more than 130 national ABS laws around the globe. Throughout this period, the Covington life sciences team has helped its pharmaceutical, food and biotech clients navigate the ever-more challenging Harlequin’s costume that is the global ABS legal regime.

In this Client Alert we share lessons learned from our 10+ years of experience on ABS in the life sciences sector.[1] As an anniversary edition, this document is a long read. For ease of navigation, we have structured it as a Q&A. 

We first recall the basics of ABS. Then we cover key questions from clients, such as compliance best practices and enforcement trends. Finally, we look at challenges in the near future, focusing on emerging ABS regimes such as the global mechanism on benefit-sharing from Digital Sequence Information (“DSI”), the genetic resource disclosure requirement when filing patents under a new World Intellectual Property Organization (“WIPO”) treaty, the new “pathogen” ABS provisions of the World Health Organization (“WHO”) Pandemic Treaty, the High Seas Treaty on marine genetic resources, and, last but not least, the new corporate due diligence obligations under the EU’s Corporate Sustainability Due Diligence Directive (“CS3D”).

If you have any questions or would like a meeting concerning the material discussed in this Client Alert, please contact our partner Bart Van Vooren at bvanvooren@cov.com.

The ABC of ABS

1. What is the purpose of the Nagoya Protocol?

The Convention on Biological Diversity of 1992 recognizes the sovereignty of countries over biological resources within their jurisdiction. The CBD has three main objectives: (1) the conservation of biodiversity, (2) its sustainable use, and (3) “the fair and equitable sharing of the benefits arising from the utilization of genetic resources.” Although there are 196 Parties to the CBD, by 2014 very few countries had implemented rules on ABS. The Nagoya Protocol was therefore negotiated as a supplemental treaty to achieve the third objective of the CBD. It does so by empowering countries to impose prior authorization (Access) and payment requirements (Benefit-Sharing) on companies that commercialize products or processes that utilize biological materials. This is intended to create financial resources, and an incentive, for countries to protect biodiversity.

Continue Reading The Nagoya Protocol at Its 10th Anniversary: Lessons Learned and New Challenges from ‘Access and Benefit-Sharing’

On 1 July 2024, Germany enacted stricter requirements for the processing of health data when using cloud-computing services. The new Section 393 SGB V aims to establish a uniform standard for the use of cloud-computing services in the statutory healthcare system, which covers around 90% of the German population. In this blog post, we describe the specific new requirements for the processing of health and social data using cloud-computing. We will also discuss whether the new rules may impact medical research and other projects that utilize cloud-computing for processing health data.

1. Scope and Background of Sec. 393 SGB V

The new Section 393 SGB V (Social Security Code – Book V) was enacted as part of the recent “Digital Act” (see our earlier blog on the Digital Act). The title of Section 393 SGB V is “Cloud-Use in the Healthcare System”. It imposes specific requirements on healthcare service providers, statutory health insurers and their contract data processors when they process health data and social data using cloud-computing services. According to the German legislator, the provision aims to enable the secure use of cloud services as a “modern, generally widespread technology in the healthcare sector and to create minimum technical standards for the use of IT systems based on cloud-computing”.

The new requirements apply to data processing using cloud-computing irrespective of whether the cloud-computing service is offered by an external vendor or is a tool that the healthcare provider or health insurer has developed in-house.

The term “cloud-computing service” is defined in the law as “a digital service that enables on-demand management and comprehensive remote access to a scalable and elastic pool of shared computing resources, even if these resources are distributed across multiple locations” (Section 384 Sentence 1 No. 5 SGB V). This reflects the corresponding definition of cloud-computing in Article 6 (30) of the NIS2-Directive (EU) 2022/2555 on cybersecurity measures. Services that fall under this definition include, inter alia, Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).

Continue Reading Germany enacts stricter requirements for the processing of Health Data using Cloud-Computing – with potential side effects for Medical Research with Pharmaceuticals and Medical Devices

This update focuses on how growing quantum sector investment in the UK and US is leading to the development and commercialization of quantum computing technologies with the potential to revolutionize and disrupt key sectors.  This is a fast-growing area that is seeing significant levels of public and private investment activity.  We take a look at how approaches differ in the UK and US, and discuss how a concerted, international effort is needed both to realize the full potential of quantum technologies and to mitigate new risks that may arise as the technology matures.

Quantum Computing

Quantum computing uses quantum mechanics principles to solve certain complex mathematical problems faster than classical computers.  Whilst classical computers use binary “bits” to perform calculations, quantum computers use quantum bits (“qubits”).  The value of a bit can only be zero or one, whereas a qubit can exist as zero, one, or a combination of both states (a phenomenon known as superposition), allowing quantum computers to solve certain problems exponentially faster than classical computers.
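For readers curious how a qubit differs from a bit in concrete terms, the distinction can be sketched in a few lines of Python. This is a purely illustrative toy (the amplitude pair, the `hadamard` function and its names are our own simplification, not how quantum software is actually built): a qubit is represented by two amplitudes whose squared magnitudes give the probabilities of measuring zero or one.

```python
import math

# A classical bit holds exactly one of two values.
classical_bit = 0  # or 1, never both

# A qubit's state is a pair of amplitudes (a, b) over the states |0> and |1>,
# with |a|^2 + |b|^2 = 1.  Measuring the qubit yields 0 with probability
# |a|^2 and 1 with probability |b|^2.
def measurement_probabilities(a, b):
    return abs(a) ** 2, abs(b) ** 2

# The Hadamard gate maps a qubit starting in |0> into an equal superposition
# of |0> and |1> -- the phenomenon the paragraph above describes.
def hadamard(a, b):
    s = 1 / math.sqrt(2)
    return s * (a + b), s * (a - b)

a, b = hadamard(1, 0)                    # start in |0>, apply the gate
p0, p1 = measurement_probabilities(a, b)
print(round(p0, 2), round(p1, 2))        # 0.5 0.5 -- equal chance of 0 or 1
```

The computational advantage comes not from this single qubit but from the fact that n qubits together hold 2^n amplitudes at once, which certain algorithms can exploit.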

The applications of quantum technologies are wide-ranging, and quantum computing has the potential to revolutionize many sectors, including life sciences, climate and weather modelling, financial portfolio management and artificial intelligence (“AI”).  However, advances in quantum computing also create risks, the most significant being to data protection.  Hackers could exploit the ability of quantum computers to solve complex mathematical problems at high speed to break currently used cryptography methods and access personal and sensitive data.

This is a rapidly developing area to which governments are only just turning their attention.  Governments are focusing not just on “quantum-readiness” and countering the emerging threats that quantum computing will present in the hands of bad actors (the US, for instance, is planning the migration of sensitive data to post-quantum encryption), but also on ramping up investment and growth in quantum technologies.

Continue Reading Quantum Computing: Developments in the UK and US

On December 5, 2023, the Spanish presidency of the Council of the EU issued a declaration to strengthen collaboration with Member States and the European Commission to develop a leading quantum technology ecosystem in Europe.

The declaration acknowledges the revolutionary potential of quantum computing, which uses quantum mechanics principles and

Continue Reading Quantum Computing: Action in the EU and Potential Impacts

This quarterly update summarizes key legislative and regulatory developments in the second quarter of 2023 related to key technologies and related topics, including Artificial Intelligence (“AI”), the Internet of Things (“IoT”), connected and automated vehicles (“CAVs”), data privacy and cybersecurity, and online teen safety.

Artificial Intelligence

AI continued to be an area of significant interest for both lawmakers and regulators throughout the second quarter of 2023.  Members of Congress continued to grapple with ways to address risks posed by AI and held hearings, made public statements, and introduced legislation to regulate AI.  Notably, Senator Chuck Schumer (D-NY) revealed his “SAFE Innovation framework” for AI legislation.  The framework reflects five principles for AI – security, accountability, foundations, explainability, and innovation – and is summarized here.  There were also a number of AI legislative proposals introduced this quarter.  Some proposals, like the National AI Commission Act (H.R. 4223) and Digital Platform Commission Act (S. 1671), propose the creation of an agency or commission to review and regulate AI tools and systems.  Other proposals focus on mandating disclosures of AI systems.  For example, the AI Disclosure Act of 2023 (H.R. 3831) would require generative AI systems to include a specific disclaimer on any outputs generated, and the REAL Political Advertisements Act (S. 1596) would require political advertisements to include a statement within the contents of the advertisement if generative AI was used to generate any image or video footage.  Additionally, Congress convened hearings to explore AI regulation this quarter, including a Senate Judiciary Committee Hearing in May titled “Oversight of A.I.: Rules for Artificial Intelligence.”

There also were several federal Executive Branch and regulatory developments focused on AI in the second quarter of 2023, including, for example:

  • White House:  The White House issued a number of updates on AI this quarter, including the Office of Science and Technology Policy’s strategic plan focused on federal AI research and development, discussed in greater detail here.  The White House also requested comments on the use of automated tools in the workplace, including a request for feedback on tools to surveil, monitor, evaluate, and manage workers, described here.
  • CFPB:  The Consumer Financial Protection Bureau (“CFPB”) issued a spotlight on the adoption and use of chatbots by financial institutions.
  • FTC:  The Federal Trade Commission (“FTC”) continued to issue guidance on AI, such as guidance expressing the FTC’s view that dark patterns extend to AI, that generative AI poses competition concerns, and that tools claiming to spot AI-generated content must make accurate disclosures of their abilities and limitations.
  • HHS Office of National Coordinator for Health IT:  This quarter, the Department of Health and Human Services (“HHS”) released a proposed rule related to certified health IT that enables or interfaces with “predictive decision support interventions” (“DSIs”) that incorporate AI and machine learning technologies.  The proposed rule would require the disclosure of certain information about predictive DSIs to enable users to evaluate DSI quality and whether and how to rely on the DSI recommendations, including a description of the development and validation of the DSI.  Developers of certified health IT would also be required to implement risk management practices for predictive DSIs and make summary information about these practices publicly available.

Continue Reading U.S. Tech Legislative & Regulatory Update – Second Quarter 2023

Today, Senate Majority Leader Chuck Schumer (D-NY) unveiled a new bipartisan proposal to develop legislation to promote and regulate artificial intelligence. In a speech at the Center for Strategic & International Studies, Leader Schumer remarked: “[W]ith AI, we cannot be ostriches sticking our heads in the sand. The question

Continue Reading Senator Schumer Unveils New Two-Part Proposal to Regulate AI

On 31 May 2023, at the close of the fourth meeting of the US-EU Trade & Tech Council (“TTC”), Margrethe Vestager – the European Union’s Executive Vice President, responsible for competition and digital strategy – announced that the EU and US are working together to develop a voluntary AI Code

Continue Reading EU and US Lawmakers Agree to Draft AI Code of Conduct

On May 23, 2023, the White House announced that it took the following steps to further advance responsible Artificial Intelligence (“AI”) practices in the U.S.:

  • the Office of Science and Technology Policy (“OSTP”) released an updated strategic plan that focuses on federal investments in AI research and development (“R&D”);
  • OSTP issued a new request for information (“RFI”) on critical AI issues; and
  • the Department of Education issued a new report on risks and opportunities related to AI in education.

These announcements build on other recent actions by the Administration in connection with AI, such as the announcement earlier this month regarding new National Science Foundation funding for AI research institutions and meetings with AI providers.

This post briefly summarizes the actions taken in the White House’s most recent announcement.

Updated OSTP Strategic Plan

The updated OSTP strategic plan defines major research challenges in AI to coordinate and focus federal R&D investments.  The plan aims to ensure continued U.S. leadership in the development and use of trustworthy AI systems, prepare the current and future U.S. workforce for the integration of AI systems across all sectors, and coordinate ongoing AI activities across agencies.

The updated plan identifies nine strategies:

Continue Reading White House Announces New Efforts to Advance Responsible AI Practices

On April 25, 2023, four federal agencies — the Department of Justice (“DOJ”), Federal Trade Commission (“FTC”), Consumer Financial Protection Bureau (“CFPB”), and Equal Employment Opportunity Commission (“EEOC”) — released a joint statement on the agencies’ efforts to address discrimination and bias in automated systems. 

The statement applies to “automated systems,” which are broadly defined “to mean software and algorithmic processes” beyond AI.  Although the statement notes the significant benefits that can flow from the use of automated systems, it also cautions against unlawful discrimination that may result from that use. 

The statement starts by summarizing the existing legal authorities that apply to automated systems and each agency’s guidance and statements related to AI.  Helpfully, the statement serves to aggregate links to key AI-related guidance documents from each agency, providing a one-stop-shop for important AI-related publications for all four entities.  For example, the statement summarizes the EEOC’s remit in enforcing federal laws that make it unlawful to discriminate against an applicant or employee and the EEOC’s enforcement activities related to AI, and includes a link to a technical assistance document.  Similarly, the statement outlines the FTC’s reports and guidance on AI, and includes multiple links to FTC AI-related documents.

After providing an overview of each agency’s position and links to key documents, the statement then summarizes the following sources of potential discrimination and bias, which could indicate the regulatory and enforcement priorities of these agencies.

  • Data and Datasets:  The statement notes that outcomes generated by automated systems can be skewed by unrepresentative or imbalanced data sets.  The statement says that flawed data sets, along with correlation between data and protected classes, can lead to discriminatory outcomes.
  • Model Opacity and Access:  The statement observes that some automated systems are “black boxes,” meaning that the internal workings of automated systems are not always transparent to people, and thus difficult to oversee.
  • Design and Use:  The statement also notes that flawed assumptions about users may play a role in unfair or biased outcomes.

We will continue to monitor these and related developments across our blogs.

Continue Reading DOJ, FTC, CFPB, and EEOC Statement on Discrimination and AI