
The Study recognizes the major opportunities that AI systems offer for promoting societal development and human rights. Alongside these opportunities, it also identifies risks that AI could pose to rights protected by the European Convention on Human Rights (ECHR), as well as to democracy and the rule of law. Examples of the risks to human rights cited in the Study include AI systems that undermine the right to equality and non-discrimination by perpetuating biases and stereotypes (e.g., in employment), and AI-driven surveillance and tracking applications that jeopardise individuals’ right to freedom of assembly and expression.

A wide range of legislative instruments applicable to AI are considered by the Study, including: (1) international legal instruments, such as the ECHR and the EU Charter of Fundamental Rights; (2) AI ethics guidelines, including ones developed by private companies and public-sector organisations; and (3) national AI instruments and strategies. After weighing the advantages and disadvantages of these measures, the Study concludes that no international legal instrument specifically tailored to the challenges posed by AI systems exists, and that there are gaps in the current level of human rights protections. Such gaps include (amongst other things) the need to ensure:

  • sufficient human control and oversight;
  • the technical robustness of AI applications; and
  • effective transparency and explainability.

To respond to the human rights challenges presented by AI, the Study sets out the principles, rights, and obligations that could act as the main elements of a future legal framework. The proposed framework seeks to translate existing human rights to the context of AI by specifying more concretely what falls under a broader human right; how it could be invoked by those subjected to AI systems; and the requirements that AI developers and deployers should meet to protect such rights. The Study identifies nine principles that are essential to respect human rights in the context of AI:

  • Human Dignity: AI deployers should inform individuals that they are interacting with an AI system whenever confusion may arise, and individuals should be granted the right to refuse interaction with an AI system whenever this can adversely impact human dignity.
  • Prevention of Harm to Human Rights, Democracy, and the Rule of Law: AI systems should be developed and used in a sustainable manner, and AI developers and deployers should take adequate measures to minimise any physical or mental harm to individuals, society and the environment.
  • Human Freedom and Human Autonomy: Individuals should have the right to effectively contest and challenge decisions informed or made by an AI system and the right to decide freely to be excluded from AI-enabled manipulation, individualised profiling, and predictions.
  • Non-Discrimination, Gender Equality, Fairness and Diversity: Member States should impose requirements to effectively counter the potential discriminatory effects of AI systems deployed by both the public and private sectors, and to protect individuals from their negative consequences.
  • Principle of Transparency and Explainability of AI Systems: Individuals should have the right to a meaningful explanation of how an AI system functions, what optimisation logic it follows, what type of data it uses, and how it affects one’s interests, whenever it generates legal effects or has similar impacts on individuals’ lives. The explanation should be tailored to the particular context, and should be provided in a manner that is useful and comprehensible for an individual.
  • Data Protection and the Right to Privacy: Member States should take particular measures to effectively protect individuals from AI-driven surveillance, including remote biometric recognition technology and AI-enabled tracking technology, as this is not compatible with the Council of Europe’s standards on human rights, democracy and the rule of law.
  • Accountability and Responsibility: Developers and deployers of AI should identify, document, and report on potential negative impacts of AI systems on human rights, democracy and the rule of law, and put in place adequate mitigation measures to ensure responsibility and accountability for any harm caused. Member States should ensure that public authorities are able to audit AI systems, including those used by private actors.
  • Democracy: Member States should take adequate measures to counter the use or misuse of AI systems for unlawful interference in electoral processes, for personalised political targeting without adequate transparency mechanisms, and more generally for shaping voters’ political behaviours and manipulating public opinion.
  • Rule of Law: Member States should ensure that AI systems used in justice and law enforcement are in line with the essential requirements of the right to a fair trial. They should pay due regard to the need to ensure the quality, explainability, and security of judicial decisions and data, as well as the transparency, impartiality, and fairness of data processing methods.

The Study recommends that the Council of Europe establish a binding legal instrument (such as a convention) setting out the main principles for AI systems, which would provide the basis for relevant national legislation. It also suggests that the Council of Europe develop further binding or non-binding sectoral instruments with detailed requirements that address specific sectoral challenges of AI. The Study recommends that the proposed legal framework pursue a risk-based approach by targeting specific AI application contexts, acknowledging that not all AI systems pose an equally high level of risk.

Next Steps

The Study was adopted by CAHAI during its plenary meeting in December 2020. Next, it will be presented to the Committee of Ministers of the Council of Europe, who may instruct CAHAI to begin developing the specific elements of a legal framework for AI. This could include a binding legal instrument, as well as non-binding and sectoral instruments.

CAHAI’s work joins similar international initiatives looking to provide guidance and build a global consensus on the development and regulation of AI, including the OECD member states’ adoption in May 2019 of the OECD Principles on AI — the first international AI standards agreed on by governments — and the establishment of the Global Partnership on Artificial Intelligence (GPAI) in June 2020. We anticipate further developments in this area in 2021, including the European Commission’s forthcoming proposals for AI legislation.

In particular, the principles and recommendations for further action set out in the Study share similar themes with ongoing EU initiatives on AI regulation, including the EU High-Level Expert Group’s Ethics Guidelines for Trustworthy AI and the European Commission’s White Paper on AI. Like the Council of Europe’s Study, these initiatives propose a risk-based approach to regulating AI, centred on upholding fundamental human rights like non-discrimination, and ensuring that AI applications are developed and deployed in a trustworthy, transparent, and explainable manner.

Stay tuned for further updates.

* The Council of Europe is an international organization that is distinct from the European Union.  Founded in 1949, the Council of Europe has a mandate to promote and safeguard the human rights enshrined in the European Convention on Human Rights. The organization brings together 47 countries, including all of the 27 EU member states.  Recommendations issued by the Council of Europe are not binding, but EU institutions often build on Council of Europe standards when drawing up legislation.

Lisa Peets

Lisa Peets is co-chair of the firm’s Technology and Communications Regulation Practice Group and a member of the firm’s global Management Committee. Lisa divides her time between London and Brussels, and her practice embraces regulatory compliance and investigations alongside legislative advocacy. In this context, she has worked closely with many of the world’s best-known technology companies.

Lisa counsels clients on a range of EU and UK legal frameworks affecting technology providers, including data protection, content moderation, platform regulation, copyright, e-commerce and consumer protection, and the rapidly expanding universe of additional rules applicable to technology, data and online services. Lisa also routinely advises clients in and outside of the technology sector on trade related matters, including EU trade controls rules.

According to Chambers UK (2024 edition), “Lisa provides an excellent service and familiarity with client needs.”

Marty Hansen

Martin Hansen has over two decades of experience representing some of the world’s leading innovative companies in the internet, IT, e-commerce, and life sciences sectors on a broad range of regulatory, intellectual property, and competition issues. Martin has extensive experience in advising clients on matters arising under EU and U.S. law, UK law, the World Trade Organization agreements, and other trade agreements.

Sam Jungyun Choi

Recognized by Law.com International as a Rising Star (2023), Sam Jungyun Choi is an associate in the technology regulatory group in Brussels. She advises leading multinationals on European and UK data protection law and new regulations and policy relating to innovative technologies, such as AI, digital health, and autonomous vehicles.

Sam is an expert on the EU General Data Protection Regulation (GDPR) and the UK Data Protection Act, having advised on these laws since they started to apply. In recent years, her work has evolved to include advising companies on new data and digital laws in the EU, including the AI Act, Data Act and the Digital Services Act.

Sam’s practice includes advising on regulatory, compliance and policy issues that affect leading companies in the technology, life sciences and gaming sectors on laws relating to privacy and data protection, digital services and AI. She advises clients on the design of new products and services, preparing privacy documentation, and developing data and AI governance programs. She also advises clients on matters relating to children’s privacy and policy initiatives relating to online safety.

Marianna Drake

Marianna Drake counsels leading multinational companies on some of their most complex regulatory, policy and compliance-related issues, including data privacy and AI regulation. She focuses her practice on compliance with UK, EU and global privacy frameworks, and new policy proposals and regulations relating to AI and data. She also advises clients on matters relating to children’s privacy, online safety and consumer protection and product safety laws.

Her practice includes defending organizations in cross-border, contentious investigations and regulatory enforcement in the UK and EU Member States. Marianna also routinely partners with clients on the design of new products and services, drafting and negotiating privacy terms, developing privacy notices and consent forms, and helping clients design governance programs for the development and deployment of AI technologies.

Marianna’s pro bono work includes providing data protection advice to UK-based human rights charities, and supporting a non-profit organization in conducting legal research for strategic litigation.