The Study recognises the major opportunities that AI systems offer to promote societal development and human rights. Alongside these opportunities, it also identifies the risk that AI could endanger rights protected by the European Convention on Human Rights (ECHR), as well as democracy and the rule of law. Examples of risks to human rights cited in the Study include AI systems that undermine the right to equality and non-discrimination by perpetuating biases and stereotypes (e.g., in employment), and AI-driven surveillance and tracking applications that jeopardise individuals’ rights to freedom of assembly and expression.
The Study considers a wide range of instruments applicable to AI, including: (1) international legal instruments, such as the ECHR and the EU Charter of Fundamental Rights; (2) AI ethics guidelines, including those developed by private companies and public-sector organisations; and (3) national AI instruments and strategies. After weighing the advantages and disadvantages of these measures, the Study concludes that no international legal instrument specifically tailored to the challenges posed by AI systems exists, and that there are gaps in the current level of human rights protection. These gaps include (amongst others) the need to ensure:
- sufficient human control and oversight;
- the technical robustness of AI applications; and
- effective transparency and explainability.
To respond to the human rights challenges presented by AI, the Study sets out the principles, rights, and obligations that could form the main elements of a future legal framework. The proposed framework seeks to translate existing human rights into the context of AI by specifying more concretely what falls within the scope of a broader human right, how individuals subject to AI systems could invoke it, and the requirements that AI developers and deployers should meet to protect it. The Study identifies nine principles that are essential to respecting human rights in the context of AI:
- Human Dignity: AI deployers should inform individuals that they are interacting with an AI system whenever confusion may arise, and individuals should be granted the right to refuse interaction with an AI system whenever this can adversely impact human dignity.
- Prevention of Harm to Human Rights, Democracy, and the Rule of Law: AI systems should be developed and used in a sustainable manner, and AI developers and deployers should take adequate measures to minimise any physical or mental harm to individuals, society and the environment.
- Human Freedom and Human Autonomy: Individuals should have the right to effectively contest and challenge decisions informed or made by an AI system and the right to decide freely to be excluded from AI-enabled manipulation, individualised profiling, and predictions.
- Non-Discrimination, Gender Equality, Fairness and Diversity: Member States should impose requirements to effectively counter the potential discriminatory effects of AI systems deployed by both the public and private sectors, and to protect individuals from their negative consequences.
- Transparency and Explainability of AI Systems: Individuals should have the right to a meaningful explanation of how an AI system functions, what optimisation logic it follows, what type of data it uses, and how it affects their interests, whenever it generates legal effects or has similar impacts on their lives. The explanation should be tailored to the particular context and provided in a manner that is useful and comprehensible for the individual.
- Data Protection and the Right to Privacy: Member States should take particular measures to effectively protect individuals from AI-driven surveillance, including remote biometric recognition technology and AI-enabled tracking technology, as such surveillance is not compatible with the Council of Europe’s standards on human rights, democracy and the rule of law.
- Accountability and Responsibility: Developers and deployers of AI should identify, document, and report on potential negative impacts of AI systems on human rights, democracy and the rule of law, and put in place adequate mitigation measures to ensure responsibility and accountability for any harm caused. Member States should ensure that public authorities are able to audit AI systems, including those used by private actors.
- Democracy: Member States should take adequate measures to counter the use or misuse of AI systems for unlawful interference in electoral processes, for personalised political targeting without adequate transparency mechanisms, and more generally for shaping voters’ political behaviours and manipulating public opinion.
- Rule of Law: Member States should ensure that AI systems used in justice and law enforcement are in line with the essential requirements of the right to a fair trial. They should pay due regard to the need to ensure the quality, explainability, and security of judicial decisions and data, as well as the transparency, impartiality, and fairness of data processing methods.
The Study recommends that the Council of Europe adopt a binding legal instrument (such as a convention) establishing the main principles for AI systems, which would provide the basis for relevant national legislation. It also suggests that the Council of Europe develop further binding or non-binding sectoral instruments with detailed requirements addressing the specific challenges of AI in particular sectors. Finally, the Study recommends that the proposed legal framework pursue a risk-based approach, targeting specific AI application contexts and acknowledging that not all AI systems pose an equally high level of risk.
Next Steps
The Study was adopted by CAHAI during its plenary meeting in December 2020. Next, it will be presented to the Committee of Ministers of the Council of Europe, which may instruct CAHAI to begin developing the specific elements of a legal framework for AI. This could include a binding legal instrument, as well as non-binding and sectoral instruments.
CAHAI’s work joins similar international initiatives looking to provide guidance and build a global consensus on the development and regulation of AI, including the adoption of the OECD Principles on AI in May 2019 (the first international AI standards agreed on by governments) and the establishment of the Global Partnership on Artificial Intelligence (GPAI) in June 2020. We anticipate further developments in this area in 2021, including the European Commission’s forthcoming proposals for AI legislation.
In particular, the principles and recommendations for further action set out in the Study share similar themes with ongoing EU initiatives on AI regulation, including the Ethics Guidelines for Trustworthy AI prepared by the EU High-Level Expert Group on AI and the European Commission’s White Paper on AI. Like the Council of Europe’s Study, these initiatives propose a risk-based approach to regulating AI, centred on upholding fundamental rights such as non-discrimination, and on ensuring that AI applications are developed and deployed in a trustworthy, transparent, and explainable manner.
Stay tuned for further updates.
* The Council of Europe is an international organisation that is distinct from the European Union. Founded in 1949, the Council of Europe has a mandate to promote and safeguard the human rights enshrined in the European Convention on Human Rights. The organisation brings together 47 countries, including all 27 EU member states. Recommendations issued by the Council of Europe are not binding, but EU institutions often build on Council of Europe standards when drawing up legislation.