Dan Cooper

Daniel Cooper is co-chair of Covington’s Data Privacy and Cyber Security Practice, and advises clients on information technology regulatory and policy issues, particularly data protection, consumer protection, AI, and data security matters. He has over 20 years of experience in the field, representing clients in regulatory proceedings before privacy authorities in Europe and counseling them on their global compliance and government affairs strategies. Dan regularly lectures on the topic, and was instrumental in drafting the privacy standards applied in professional sport.

According to Chambers UK, his "level of expertise is second to none, but it's also equally paired with a keen understanding of our business and direction." It was noted that "he is very good at calibrating and helping to gauge risk."

Dan is qualified to practice law in the United States, the United Kingdom, Ireland and Belgium. He has also been appointed to the advisory and expert boards of privacy NGOs and agencies, including the IAPP's European Advisory Board, Privacy International, and ENISA, the EU's cybersecurity agency.

Kenya has released its first National Artificial Intelligence Strategy (2025–2030), a landmark document on the continent that sets out a government-led vision for ethical, inclusive, and innovation-driven AI adoption. Framed as a foundational step in the country’s digital transformation agenda, the strategy articulates policy ambitions that will be of interest to global companies developing, deploying, or investing in AI technologies across Africa.

While the strategy is explicitly domestic in focus, its framing—and the architecture of its governance, infrastructure, and data pillars—reflects a broader trend: the localization of global AI governance norms in high-growth, emerging markets.

What the Strategy Means for Global Technology Governance

The strategy touches on several themes that intersect with enterprise risk, product development, and regulatory foresight for multinationals:

  • Data governance and sovereignty: Kenya signals a strong intent to develop AI within national parameters, grounded in local data ecosystems. The strategy explicitly references data privacy, cybersecurity, and ethics as core enablers of the AI ecosystem. For global companies with cloud-based models or cross-border data transfer frameworks, these developments may signal localization pressures or evolving consent standards.
  • Sector-specific use cases: Healthcare, agriculture, financial services, and public administration are named as strategic AI priorities. Companies operating in the life sciences, health tech, or diagnostics space should watch closely for how regulatory authorities may interpret and apply ethical or risk-based AI guidelines—especially where AI is used in clinical decision-making, diagnostics, or personalized medicine.
  • Public-private AI infrastructure development: The strategy envisages expanded digital infrastructure, data centers, and cloud resources, as well as national research hubs. This may create commercial opportunities—but could also trigger localization requirements or procurement-related restrictions, particularly for telecommunications and hyperscale cloud providers.
  • Future legal frameworks: The current strategy is not itself a binding legal instrument, but it points to future policy development—especially around governance, regulatory oversight, and risk classification of AI systems. Teams advising on AI risk, litigation exposure, and AI-assisted products (including generative tools) will want to track the next wave of draft legislation and implementation guidance.

Continue Reading Kenya’s AI Strategy 2025–2030: Signals for Global Companies Operating in Africa

On March 20, 2025, the Court of Justice of the European Union (“CJEU”) ruled on the fairness, under EU consumer protection law, of a contractual clause allocating a percentage of an athlete’s income to a professional services provider (Case C‑365/23 [Arce]).  This ruling sets an important precedent and strengthens the protection afforded by consumer protection law to minors who enter into professional service contracts, whether in sport or elsewhere.

Background

The case was referred to the CJEU by a Latvian court.  It concerns a contract whereby a company undertook to provide career support services – including coaching, training, sports medicine, sports psychology, career guidance, club contracts, marketing, legal services, and accounting – to a basketball player, who was a minor at the time and therefore represented by his parents.  In exchange for the company's services, the athlete agreed to pay 10% of any net income (plus VAT) he would receive over a period of 15 years from the signing of the contract.  At the time of signing the contract, the athlete was not a professional.  Some years later, however, he became a professional athlete.  When the athlete refused to pay the percentage to the company, the company sued him to enforce the contract.  The Latvian courts asked the CJEU whether it could assess the fairness of this long-term financial commitment under the Latvian legislation implementing Directive 93/13/EEC on unfair terms in consumer contracts ("UCTD").

Application of the Unfair Contract Terms Directive

Under the UCTD, a contractual clause in a business-to-consumer contract (not negotiated by the consumer) is unfair if it causes a significant imbalance in the parties' rights and obligations under the contract, to the detriment of the consumer.  The CJEU ruled that the UCTD, as transposed into Latvian law, applies to the contract between the professional services provider and the athlete because the athlete was not yet engaged in professional sport at the time the contract was signed.  The status of "consumer" must be assessed at the time of the conclusion of the contract.  Consequently, the athlete was a "consumer" within the meaning of the UCTD.  The CJEU ruled that the UCTD applies even if the individual later embarks on a professional career.

Continue Reading CJEU Rules on Fairness of Remuneration Clause in Sports Contract

On March 21, 2025, the European Commission announced that the Consumer Protection Cooperation Network ("CPC-N") had initiated enforcement proceedings against an online gaming company, for allegedly violating EU consumer protection laws and engaging in practices that could pose a particular risk to children.  The gaming company now has one month to respond to the CPC-N's findings and propose commitments addressing the concerns raised.

Continue Reading Consumer Watchdogs Turn Their Attention to the Online Gaming Industry

On February 7, 2025, the OECD launched a voluntary framework for companies to report on their efforts to promote safe, secure and trustworthy AI.  This global reporting framework is intended to monitor and support the application of the International Code of Conduct for Organisations Developing Advanced AI Systems delivered by the 2023 G7 Hiroshima AI Process ("HAIP Code of Conduct").  Organizations can choose to comply with the HAIP Code of Conduct and participate in the HAIP reporting framework on a voluntary basis.  This reporting framework will allow participating organizations that comply with the HAIP Code of Conduct to showcase the efforts they have made towards ensuring responsible AI practices – in a way that is standardized and comparable with other companies.

Organizations that choose to report under the HAIP reporting framework would complete a questionnaire that contains the following seven sections:

  1. Risk identification and evaluation – includes questions regarding, among others, how the organization classifies risk, identifies and evaluates risks, and conducts testing.
  2. Risk management and information security – includes questions regarding, among others, how the organization promotes data quality, protects intellectual property and privacy, and implements AI-specific information security practices.
  3. Transparency reporting on advanced AI systems – includes questions regarding, among others, reports, technical documentation, and transparency practices.
  4. Organizational governance, incident management, and transparency – includes questions regarding, among others, organizational governance, staff training, and AI incident response processes.
  5. Content authentication & provenance mechanisms – includes questions regarding mechanisms to inform users that they are interacting with an AI system, and the organization’s use of mechanisms such as labelling or watermarking to enable users to identify AI-generated content.
  6. Research & investment to advance AI safety & mitigate societal risks – includes questions regarding, among others, how the organization participates in projects, collaborations and investments regarding research on various facets of AI, such as AI safety, security, trustworthiness, risk mitigation tools, and environmental risks.
  7. Advancing human and global interests – includes questions regarding, among others, how the organization seeks to support digital literacy and human-centric AI, and to drive positive change through AI.

Continue Reading OECD Launches Voluntary Reporting Framework on AI Risk Management Practices

On 16 January 2025, the European Data Protection Board (“EDPB”) published a position paper, as it had announced last year, on the “interplay between data protection and competition law” (“Position Paper”).

In this blogpost, we outline the EDPB’s position on cooperation between EU data protection authorities (“DPAs”) and competition authorities (“CAs”) in the context of certain key issues at the intersection of data protection and competition law.

Key takeaways

  1. In the interest of coherent regulatory outcomes, the EDPB advocates for increased cooperation between DPAs and CAs.
  2. The Position Paper offers practical suggestions to that end, such as fostering closer personal relationships, mutual understanding, and a shared sense of purpose, as well as more structured mechanisms for regulatory cooperation.
  3. The EDPB is mindful of the Digital Markets Act’s (“DMA”) significance in addressing data protection and competition law risks.

Summary of the Position Paper

The EDPB first outlines certain overlaps between data protection and competition law (e.g., data serving as a parameter of competition). The EDPB argues that as both legal regimes seek to protect individuals and their choices, albeit in different ways, “strengthening the link” between data protection and competition law can “contribute to the protection of individuals and the well-being of consumers”.

The EDPB takes the view that closer cooperation between DPAs and CAs would therefore benefit individuals (and businesses) by improving the consistency and effectiveness of regulatory actions. Moreover, the EDPB emphasises that, based on the EU principle of "sincere cooperation" between regulatory authorities and pursuant to the European Court of Justice's ruling in Meta v Bundeskartellamt (2023), cooperation between DPAs and CAs would be "in some cases, mandatory and not optional".

Continue Reading EDPB highlights the importance of cooperation between data protection and competition authorities

South Africa's Information Regulator recently published its Guidance Note on Direct Marketing ("Guidance Note"), providing clarity on how personal information can be lawfully processed for direct marketing under the Protection of Personal Information Act ("POPIA"). The Guidance Note offers actionable steps for organizations to align their marketing practices with POPIA's requirements, fostering responsible marketing that complies with both the letter and spirit of the law.

In this blog, we briefly examine POPIA’s rules on direct marketing, and some of the key highlights from the Guidance Note.

How Direct Marketing is Regulated under POPIA

POPIA regulates direct marketing by establishing strict conditions for the lawful processing of personal information. It requires "responsible parties" (more commonly known as "controllers") to ensure that personal information is collected and used transparently, fairly, and only for a specific, legitimate purpose.

For direct marketing:

  • Consent is the default requirement for unsolicited electronic communications (e.g., emails, SMSs, and automated calls). Section 69 of POPIA explicitly prohibits such communications unless the data subject has given prior consent or is an existing customer under specific conditions.
  • Legitimate interests may only serve as a justification for non-electronic direct marketing (e.g., postal mail or in-person promotions) under section 11, provided the responsible party conducts a legitimate interest assessment and complies with all conditions for lawful processing.

These rules emphasize data subjects' control over their personal information, highlighting the importance of consent and the right to object.

Continue Reading Long-Awaited POPIA Guidance on Direct Marketing Published by South Africa's Information Regulator

Now that the EU Artificial Intelligence Act ("AI Act") has entered into force, the EU institutions are turning their attention to the proposal for a directive on adapting non-contractual civil liability rules to artificial intelligence (the so-called "AI Liability Directive").  Although the EU Parliament and the Council reached informal agreement on the AI Act in December 2023 (see our previous blog posts here and here), the text of the AI Liability Directive proposal is expected to change based on a complementary impact assessment published by the European Parliamentary Research Service on September 19.

Brief Overview of the AI Liability Directive

The AI Liability Directive was proposed to establish harmonised rules in fault-based claims (e.g., negligence).  These were to cover the disclosure of evidence on high-risk artificial intelligence (“AI”) systems and the burden of proof including, in certain circumstances, a rebuttable presumption of causation between the fault of the defendant (i.e., the provider or deployer of an AI system) and the output produced by the AI system or the failure of the AI system to produce an output.

Potential Changes to the AI Liability Directive

In July, a slightly amended version of the European Commission's AI Liability Directive proposal, aligning the wording with the adopted AI Act, was leaked to the press (Council document ST 12523 2024 INIT).  The amendments reflect the difference in numbering between the proposed AI Act and the enacted version.

Over the summer, the EU Parliamentary Research Service carried out a complementary impact assessment to evaluate whether the AI Liability Directive should remain on the EU's list of priorities.  In particular, the new assessment was to determine whether the AI Liability Directive is still needed in light of the proposal for a new Product Liability Directive (see our blog post here).

Continue Reading The EU Considers Changing the EU AI Liability Directive into a Software Liability Regulation

On September 12, 2024, the European Commission announced that it will launch a public consultation on additional standard contractual clauses for international transfers of personal data to non-EU controllers and processors that are subject to the EU GDPR extra-territorially ("Additional SCCs"), something the European Commission has promised for some time.

Continue Reading EU Commission Announces New SCCs for International Transfers to Non-EU Controllers and Processors Subject to the GDPR

On August 23, 2024, the Brazilian Data Protection Authority (“ANPD”) published Resolution 19/2024, approving the Regulation on international data transfers and the content of standard contractual clauses (the “Regulation”).  The Regulation implements the international data transfer framework under the Brazilian General Data Protection Law (“LGPD”).

Under the LGPD, international data transfers from Brazil to a third country are permitted if: (i) the ANPD recognizes the third country as providing adequate protection for personal data; (ii) the data exporter and data importer enter into standard contractual clauses (“SCCs”), binding corporate rules, or special contractual clauses; or (iii) one of the specific cases listed in the LGPD applies (e.g., the transfer is necessary to protect the life of the data subject, the data subject consents to the transfer, or the ANPD authorizes the transfer).  The Regulation relates to the data transfer instruments mentioned in (i) and (ii).

Standard Contractual Clauses

The Regulation approves and publishes SCCs for the transfer of personal data outside of Brazil without the ANPD's authorization.  The SCCs cover both controller-to-controller and controller-to-processor international data transfers.  Like the EU SCCs, they are contracts signed between the data exporter (in Brazil) and the data importer (in a third country).  The parties may not modify them.  The ANPD may allow the transfer of personal data outside of Brazil on the basis of "equivalent SCCs" adopted by third countries, provided that they are compatible with the LGPD.  The ANPD has not (yet) indicated that it would recognize the EU SCCs as equivalent.

Brazilian controllers that use contractual clauses to transfer personal data internationally must replace those contracts with the newly published SCCs by August 22, 2025.

Continue Reading Brazil Issues New Regulation on International Data Transfers

On May 30, 2024, the European Court of Justice ("CJEU") ruled that any button a consumer uses to order a service online must clearly indicate that the consumer commits to pay the price for the relevant service by affirmatively clicking on it (Conny, Case C-400/22).  At issue was whether this requirement also applies where the consumer's obligation to pay arises only upon fulfilment of a further condition.

Continue Reading CJEU Clarifies Online “Order Buttons” Must Indicate that the Consumer is Assuming an Obligation to Pay