Key Points

  • Mexico’s Supreme Court (“SCJN”) has decided, or will soon decide, the fate of key policies promoted by President López Obrador.
  • Lacking a supermajority in Congress to amend the Constitution, López Obrador has seen several of his legislative bills declared unconstitutional, such as an overhaul of the electoral system, while others, such as the Electric Power Industry Law, are still pending full review by the SCJN.
  • Open confrontation between the President and the SCJN has become more evident this year. A slate of candidates submitted in early November by the President to fill an open seat on the SCJN heralds closer alignment with Morena—the President’s party—and reflects how central the SCJN is to cementing the future of López Obrador’s self-described “Fourth Transformation of Mexico.”
  • The composition of the SCJN will play a decisive role well beyond the end of the López Obrador administration (September 2024) in areas that are critical for the overall business climate, such as energy, tax policy, antitrust, the role of the armed forces in public security, telecom, cybersecurity and artificial intelligence regulation, among others.

López Obrador and the SCJN

On November 7, 2023, the former President of Mexico’s SCJN, Arturo Zaldívar, resigned prematurely, a year before the end of his term and after serving on the Court for 14 years. The day after his resignation, Mr. Zaldívar joined the campaign of López Obrador’s favored candidate to succeed him as president, Claudia Sheinbaum. Mr. Zaldívar’s resignation caused a political uproar and was widely perceived as a move that allows López Obrador to appoint a new SCJN minister for a full new term. The Constitution only permits ministers to resign for “serious reasons,” and Zaldívar is expected to have a prominent role in a future Morena administration, including that of Attorney General after the two-year cooling-off period required by the Constitution.

Of the 11 ministers on the SCJN, four have joined the bench during López Obrador’s tenure, following Senate confirmation: Juan Luis González Alcántara y Carrancá (12/2018), Yasmín Esquivel Mossa (03/2019), Ana Margarita Ríos Farjat (12/2019), and Loretta Ortiz Ahlf (12/2021). This new vacancy allows the President to nominate a fifth Supreme Court minister, who will serve a 15-year term.

The President accepted Zaldívar’s resignation and, on November 15, 2023, sent to the Senate his slate of candidates to replace him. The candidates are all women who currently work in his administration, are members of his Morena party, and are aligned with his political ideology and government program. Two of them are also related to important members of the party (one is the sister of the Interior Minister and the other is the sister of the Mayor of Mexico City).

Continue Reading Recent Developments in Mexico’s Supreme Court

Recently, a bipartisan group of U.S. senators introduced new legislation to address transparency and accountability for artificial intelligence (AI) systems, including those deployed for certain “critical impact” use cases. While many other targeted, bipartisan AI bills have been introduced in both chambers of Congress, this bill appears to be one of the first to propose specific legislative text for broadly regulating AI testing and use across industries. 

The Artificial Intelligence Research, Innovation, and Accountability Act—led by Senate Commerce Committee members John Thune (R-SD), Amy Klobuchar (D-MN), Roger Wicker (R-MS), John Hickenlooper (D-CO), Shelley Moore Capito (R-WV), and Ben Ray Luján (D-NM)—would establish new testing standards for high-impact AI systems; mandate reporting from AI companies on the testing, training, use, and benefits of high-impact AI systems; and formalize disclosure requirements for AI-generated content.  The bill would also create additional safeguards for certain “critical-impact” AI systems, including those that involve the collection and processing of biometric data, relate to critical infrastructure, or have criminal-justice applications.

The Thune-Klobuchar bill would also direct the National Institute of Standards and Technology to facilitate standards for capturing and disclosing the chain of development (known as “provenance”) of digital content, which would allow users to understand whether AI was involved in producing particular content and assess content authenticity.
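To make the provenance concept concrete, below is a minimal, purely illustrative sketch of the kind of machine-readable provenance chain such standards aim to formalize. The structure and field names are invented for this example; they are not drawn from the bill or from any NIST publication.

```python
# Hypothetical sketch only: the record structure and field names below are
# invented for illustration and do not reflect the bill or any NIST standard.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProvenanceEntry:
    actor: str       # e.g., "camera firmware", "image-generation model"
    action: str      # e.g., "captured", "ai_generated", "edited"
    timestamp: str   # ISO 8601 time of this step in the content's history

@dataclass
class ProvenanceRecord:
    content_id: str                                  # identifier of the digital asset
    chain: List[ProvenanceEntry] = field(default_factory=list)

    @property
    def ai_involved(self) -> bool:
        """Consumer-facing check: was AI used anywhere in the chain?"""
        return any(entry.action.startswith("ai_") for entry in self.chain)

# A piece of content generated by an AI model and then manually edited.
record = ProvenanceRecord(
    content_id="asset-123",
    chain=[
        ProvenanceEntry("image-generation model", "ai_generated", "2023-11-15T10:00:00Z"),
        ProvenanceEntry("photo editor", "edited", "2023-11-15T10:05:00Z"),
    ],
)
print(record.ai_involved)  # True, supporting disclosure that AI helped produce the content
```

Under a scheme along these lines, a platform could surface a flag like `ai_involved` to users, which is the kind of disclosure and authenticity assessment the bill contemplates.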

This new legislation was introduced as Congress and the Administration continue their robust focus on AI policymaking. Majority Leader Chuck Schumer (D-NY) and a bipartisan group of colleagues announced their “SAFE Innovation Framework” in June, and have continued to host AI Insight Forums with industry experts and stakeholders.  Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO) announced their privacy framework for AI regulation in September. 

Dozens of targeted bipartisan AI bills are also pending in Congress, including more than half a dozen that have progressed through their committees of jurisdiction. And, in October, President Biden released his AI executive order outlining a comprehensive strategy to support the development and deployment of safe and secure AI.

EU advocate general Collins has reiterated that individuals’ right to claim compensation for harm caused by GDPR breaches requires proof of “actual damage suffered” as a result of the breach, and “clear and precise evidence” of such damage – mere hypothetical harms or discomfort are insufficient. The advocate general also found that unauthorised access to data does not amount to “identity theft” as that term is used in the GDPR.

The right for individuals to claim compensation for data breaches has long been a controversial and uncertain aspect of the GDPR – see our previous blogs here, here, here, and here for example.

The present case (C-182/22 and C-189/22) arose from a data breach that caused an individual’s personal data, including his name, date of birth, and a copy of his identity card, to be accessed by an unknown third party. Although there was no evidence that the third party had harmed the claimant by using the stolen data for identity fraud or similar purposes, the claimant alleged that the unauthorised access to his data caused him emotional distress and amounted to “identity theft”, therefore entitling him to compensation.

Applying the court’s ruling in the Österreichische Post case (see our blog on that case here), the advocate general noted that GDPR compensation must reflect the “actual damage suffered” as a result of the relevant GDPR infringement, and that there must be “clear and precise” evidence of that damage. Merely possible or hypothetical damage, or mere disquiet that a breach has occurred, is insufficient. As a result, the advocate general concluded that the claimant has a right to compensation only if he can prove both that he suffered actual damage and that the damage was caused by a GDPR infringement.

The advocate general went on to note that unauthorised access to personal data does not by itself amount to “identity theft” – a term used in the GDPR as an example of a harm for which individuals should be compensated. Instead, the term “identity theft” in the GDPR is used interchangeably with “identity fraud” – that is, it involves some active attempt to use the data to assume another person’s identity. The fact that an unauthorised party has received access to data may enable that party to commit identity theft or fraud, but it is not of itself identity theft or fraud.

What happens next?

The advocate general’s opinion is influential but not binding on the CJEU, which will issue a final ruling on the case in the coming months. And this case is only one of a raft of cases currently before the CJEU that are set to examine damages under the GDPR (see, for example, C-687/21 and C-741/21). The question of how to define non-material damage is also of increasing importance as EU member states continue their transposition of the Representative Actions Directive.

*                             *                             *

Covington’s Data Privacy and Cybersecurity Practice regularly advises on European privacy laws, including data breaches, cyber incidents, and litigation at the European Court of Justice.  If you have any questions about the implications of this ruling for your business, please let us know.

(This blog post was written with the contributions of Alberto Vogel.)

On October 3, 2023, an overwhelming majority of the European Parliament (“Parliament”) adopted its position on the EU Media Freedom Act (the “Act”), introducing a number of amendments to the text of the Act as proposed by the European Commission (the “Commission”).

The Commission’s proposal for a Regulation establishing a common framework for media services in the internal market (European Media Freedom Act) and amending Directive 2010/13/EU, published on September 16, 2022, aims, inter alia, to safeguard media independence and promote media pluralism across the EU. It also establishes specific requirements for Very Large Online Platforms (“VLOPs”) as defined under Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market For Digital Services and amending Directive 2000/31/EC (the Digital Services Act).

This blog post summarizes some of the key developments resulting from Parliament’s proposed amendments in relation to: (i) requirements for VLOPs when removing content of media service providers from their platforms (Article 17); and (ii) the rights of media service providers (Article 4).

Takedown obligations for VLOPs (Article 17)

Building on the Commission’s original proposal, the position adopted by Parliament, if enacted into law, would impose a number of obligations on VLOPs when taking down content found to be in violation of the platform’s own terms and conditions. As a general rule, VLOPs will need to ensure that their content moderation systems do not negatively impact media freedom and pluralism. More specifically, VLOPs must:

Continue Reading European Parliament Adopts its Position on the EU Media Freedom Act

Earlier this month, the New York Department of Financial Services (“NYDFS”) announced that it had finalized the Second Amendment to its “first-in-the-nation” cybersecurity regulation, 23 NYCRR Part 500.  This Amendment implements many of the changes that NYDFS originally proposed in prior versions of the Second Amendment released for public comment in November 2022 and June 2023.  The first version of the Proposed Second Amendment proposed increased cybersecurity governance and board oversight requirements, the expansion of the types of policies and controls companies would be required to implement, the creation of a new class of companies subject to additional requirements, expanded incident reporting requirements, and the introduction of enumerated factors to be considered in enforcement decisions, among other changes.  The revisions in the second version reflect adjustments rather than substantial changes from the first version.  Compliance periods for the newly finalized requirements in the Second Amendment will be phased in over the next two years, as set forth in additional detail below.

The finalized Second Amendment largely adheres to the revisions from the second version of the Proposed Second Amendment but includes a few substantive changes, including those described below:

  • The finalized Amendment removes the previously-proposed requirement that each class A company conduct independent audits of its cybersecurity program “at least annually.”  While the finalized Amendment still requires each class A company to conduct such audits, the audits need only occur at a frequency based on the company’s risk assessment.  NYDFS stated that it made this change in response to comments that an annual audit requirement would be overly burdensome and with the understanding that class A companies typically conduct more than one audit annually.  See Section 500.2(c).
  • The finalized Amendment updates the oversight requirements for the senior governing body of a covered entity with respect to the covered entity’s cybersecurity risk management.  Updates include, among others, a requirement to confirm that the covered entity’s management has allocated sufficient resources to implement and maintain a cybersecurity program.  This requirement was part of the proposed definition of “Chief Information Security Officer.”  NYDFS stated that it moved this requirement to the senior governing bodies in response to comments that CISOs do not typically make enterprise-wide resource allocation decisions, which are instead the responsibility of senior management.  See Section 500.4(d).
  • The finalized Amendment removes a proposed additional requirement to report certain privileged account compromises to NYDFS.  NYDFS stated that it did so in response to public comments that this proposed requirement “is overbroad and would lead to overreporting.”  However, the finalized Amendment retains previously-proposed changes that will require covered entities to report certain ransomware deployments or extortion payments to NYDFS.  See Section 500.17(a).
Continue Reading New York Department of Financial Services Finalizes Second Amendment to Cybersecurity Regulation

On October 26, 2023, the European Court of Justice (“CJEU”) decided that the GDPR grants a patient the right to obtain a first copy of his or her medical records free of charge (case C-307/22, FT v DW).  As a result, the CJEU held that a provision of German law that permitted doctors to ask their patients to pay the costs associated with providing access to their medical records is contrary to EU law.

A patient seeking to uncover errors in his dentist’s work requested access to his medical records.  The dentist replied that, under German law, access to the patient’s medical records could be made conditional on the data subject’s payment of the costs connected with providing the records.  The patient claimed that this was inconsistent with the GDPR, which gives data subjects a right to access a copy of their data (Article 15).

The CJEU held that, generally, exercising the right of access under the GDPR should not entail any cost for the data subject, and that such a cost may be imposed only where the data subject has already received a first copy of his or her data free of charge.  The Court also clarified that the GDPR does not require data subjects to provide reasons for their request; therefore, the data holder cannot reject an access request on the grounds that the request is not aimed at verifying GDPR compliance.

Finally, the CJEU reiterated that the data subject must be given a “faithful and intelligible reproduction” of the data (see our blog post here).  This includes sharing a full copy of documents containing the data subject’s personal data – rather than just extracts – if doing so is “essential” for the data subject to understand and verify the accuracy and exhaustiveness of the data processing.

The scope of the GDPR’s right of access (see our blog posts here and here) has been heavily litigated at both the EU and national level.  At the national level, in a surprising decision earlier this year, the Belgian Data Protection Authority held that it would be excessive to ask an employer to search its email servers for all emails concerning a former employee.  According to the Authority, this would constitute a “disproportionate effort” for the former employer because, among other things, the requestor had been an employee for eight years and, for some period of time, the email address the requestor used was also used by other employees.  In addition, the requestor had not provided any parameters that could aid the former employer in its search through the email servers.

*                             *                             *

Covington’s Data Privacy and Cybersecurity Practice regularly advises on data subject access requests, and on privacy investigations and disputes, including at the CJEU.  If you have any questions about the interaction between data protection and local laws, we are happy to assist.

(This blog post was written with the contributions of Alberto Vogel and Diane Valat.)

The Government of Brazil has initiated a public consultation offering companies, business associations or civil society organizations an opportunity to comment on the country’s proposed new foreign trade strategy.

The consultation was initiated by the Foreign Trade Board (CAMEX), Brazil’s federal government interagency mechanism to coordinate the country’s trade policy.  CAMEX is part of the Executive Office of the President and chaired by the Vice-President.  It was recently reorganized and strengthened by Decree 11428 of March 2, 2023.

The public consultation is open until December 6, 2023.  Its scope includes measures in five areas:

  1. Export competitiveness;
  2. Economic integration;
  3. Trade facilitation and reduction of bureaucracy;
  4. Trade and sustainability; and
  5. Remedies for illegal and unfair trade practices.

Covington’s Public Policy team is ready to support clients interested in submitting comments or suggestions.

On October 12, 2023, the Italian Data Protection Authority (“Garante”) published guidance on the use of AI in healthcare services (“Guidance”).  The document builds on principles enshrined in the GDPR and on national and EU case law.  Although the Guidance focuses on Italian national healthcare services, it offers considerations relevant to the use of AI in the healthcare space more broadly.

We provide below an overview of key takeaways.

Lawfulness of processing

The “substantial public interest” derogation for the processing of health data (Article 9(2)(g) of the GDPR) must be grounded in EU law or in specific provisions of national law.  Moreover, when relying on that ground, profiling and automated decision-making may only take place if expressly provided for by law.

Accountability, definition of roles and privacy by design and by default

The Garante stresses the importance of the principles of privacy by design and by default, connected with accountability.  Controllers should carefully consider the design of systems and appropriate data protection safeguards throughout the entire AI cycle.  Additionally, the roles of each stakeholder involved should be determined appropriately.

Data protection impact assessment (“DPIA”)

The Garante unequivocally states that the processing of health data through AI to deliver health services at the national level, which entails systematic and large-scale processing, qualifies as “high risk” and therefore requires a DPIA.  Among other things, the DPIA should take into account specific risks, such as discrimination, linked to the use of algorithms to identify trends and draw conclusions from certain datasets, and to take automated decisions based on profiling.  The DPIA should also carefully outline the role of human intervention in those decision-making processes.

Key principles for performing public interest tasks through AI tools and algorithms

The Garante recalls the application of three key principles, established by recent national case law, when processing personal data by means of AI tools and algorithms in the public interest, namely:

  • Transparency: data subjects have a right to know about the existence of decision-making based on automated processing, and to be informed about the logic involved;
  • Human intervention: human intervention capable of controlling, confirming, or refuting an automated decision should be guaranteed; and
  • Non-discrimination: controllers should ensure that they use reliable AI systems, and implement appropriate measures to reduce opaqueness and errors, and periodically review the systems’ effectiveness, given the potential discriminatory effects that processing of health data may yield.  

Quality, integrity and confidentiality of data

Ensuring the accuracy and quality of data processed is paramount in this context, not least to ensure adequate and safe therapeutic assistance.  Controllers should therefore evaluate carefully the underlying risks and take appropriate measures to address them.

Moreover, the authority highlights the risks connected with potential biases introduced in the development and use of the analyses, and/or by the volume of data used, which may result in a negative impact on, or discriminatory effects for, individuals.  Controllers should mitigate these risks by taking the following measures: (1) clarify the algorithmic logic used by the AI to generate data and services; (2) keep a record of the checks performed to avoid biases and of the measures implemented; and (3) monitor risks.

Transparency and fairness

To ensure transparency and fairness in automated decision-making processes, and in the particular context of national healthcare services, the Garante recommends implementing the following measures:

  • ensure clarity, predictability and transparency of the legal basis, including by conducting dedicated information campaigns, and provide effective methods for data subjects to exercise their rights;
  • consult stakeholders and data subjects in the context of conducting a DPIA, and publish at least an excerpt of the DPIA;
  • inform data subjects in clear, concise and comprehensible terms, not only with regard to the elements prescribed by Articles 13 and 14 of the GDPR, but also about (i) whether the processing is performed in the algorithm’s training phase or in its subsequent application, describing the logic and characteristics of the processing; (ii) any obligations and responsibilities imposed on healthcare professionals using AI-based healthcare systems; and (iii) the advantages, with regard to diagnostics and therapy, resulting from the use of such technology;
  • when used for therapeutic purposes, ensure that data processing based on AI is only executed on the basis of an express request by the healthcare professional, and not automatically; and
  • regulate the healthcare practitioner’s professional responsibility.

Human supervision

The Garante highlights the potential risks to individuals’ rights and freedoms posed by exclusively automated decision-making, and endorses effective human intervention through highly skilled supervision.  The authority recommends ensuring a central role for human supervision, in particular by the healthcare professional, in the algorithm’s training phase.

Principles relating to human dignity and personal identity

The Guidance concludes with some general considerations on the role of ethics in the future development of AI systems in the health space, in order to safeguard human dignity and personal identity, especially with regard to vulnerable individuals.  The Garante recommends carefully selecting and engaging reliable suppliers of AI services, including by verifying documentation up front, such as an AI impact assessment (for more information on AI impact assessments, see our previous blog post here).

***

Covington’s Data Privacy and Cybersecurity Team regularly advises clients on the laws surrounding AI and continues to monitor developments in the field of AI.  If you have any questions about AI in the healthcare space, our team and Covington’s Life Sciences Team would be happy to assist.

As early as this week, the Federal Senate of the Brazilian National Congress may vote on a potentially historic tax reform, revamping a tax system that has been in place since the 1960s and has grown in complexity, inefficiency, and compliance cost over the years.

The reform is a draft constitutional amendment (PEC) that requires a favorable vote by at least three-fifths of the members of each chamber of Congress in two rounds of voting (308 in the House of Deputies and 49 in the Senate).
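For reference, these thresholds follow from the size of each chamber (the House of Deputies has 513 members and the Senate 81), with the three-fifths fraction rounded up to the next whole vote:

$$\left\lceil \tfrac{3}{5} \times 513 \right\rceil = \lceil 307.8 \rceil = 308, \qquad \left\lceil \tfrac{3}{5} \times 81 \right\rceil = \lceil 48.6 \rceil = 49.$$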

The House approved the amendment on July 7, 2023, with 382 and 370 votes in the first and second rounds, respectively.  The Senate must now vote on the amendment.

Pressure Politics in the Senate

The reform is largely focused on consumption taxes, creating a full-fledged value-added tax (VAT) for Brazil, although it also includes changes to property taxes.  Its outline, political economy, and approval process were discussed in this blog post.

The Senate rapporteur’s report includes key changes to the House-approved draft text.

The Senate is under pressure to establish a tax ceiling for the VAT.  President Luiz Inácio Lula da Silva’s administration is pursuing a strategy to increase government revenue in order to achieve the country’s ambitious new fiscal framework goals.  Private sector groups are concerned the administration might push for a VAT rate higher than the existing tax level, increasing the burden on companies.  They are also concerned about the scope of the proposed Selective Tax on goods and services with negative health and environmental externalities.  The opposition in Congress is echoing these fears.

Continue Reading Key Vote on Tax Reform Expected in Brazil’s Senate