Artificial intelligence (AI) has become a pervasive topic of discussion and debate across society. Mario Hernández Ramos, Chair of the Committee on Artificial Intelligence at the Council of Europe, highlights the risks AI poses to human rights and describes the Council of Europe's efforts to address those risks effectively.
The regulation of AI has been contentious, with opinion divided on whether strict rules stifle innovation or provide necessary safeguards. Like any technology, AI can affect society and individuals both positively and negatively, and regulation is essential to ensuring it is used responsibly.
The debate should therefore shift toward what aspects of AI should be regulated, how, by whom, and from what ethical and legal standpoint. Countries have adopted widely varying approaches to AI regulation, reflecting the complexity and diversity of the issue.
In the past, AI regulation consisted mainly of voluntary corporate codes of conduct, which lacked specific guidelines and enforceability. The Council of Europe therefore recognized the need for a more comprehensive and binding regulatory framework to address the risks associated with AI.
The creation of the Committee on Artificial Intelligence (CAI) by the Council of Europe marked a significant step toward international AI regulation. The committee, comprising representatives of member states, the private sector, civil society, and academia, drafted the first legally binding international treaty on AI and human rights, democracy, and the rule of law.
The Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law, adopted in 2024, aims to ensure that AI activities align with fundamental principles such as human dignity, equality, transparency, and accountability. The convention provides a framework for signatory parties to develop specific regulations at the national level while promoting technological progress and innovation.
To monitor implementation of the Framework Convention, a Conference of the Parties has been established to ensure compliance and effectiveness. In addition, the Committee on Artificial Intelligence has developed the HUDERIA Methodology, a risk and impact assessment tool designed to safeguard human rights, democracy, and the rule of law in the context of AI systems.
Overall, the Council of Europe’s initiatives in regulating AI demonstrate a commitment to protecting human values and rights in the face of technological advancements. These efforts set international standards for addressing the challenges posed by AI and ensuring a responsible and ethical approach to its development and use.