
AI is changing the world, but how do we ensure we’re developing and using it ethically?

In September 2024, the Council of Europe (CoE) opened its new Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law – also known as the AI Convention or AI Treaty – for signature.

Following in the footsteps of the EU AI Act passed earlier in 2024, the AI Convention is the first international framework of its kind covering the ethical development and use of AI.

Scope and participation in relation to the EU AI Act

The AI Convention shares some core principles with the EU AI Act, but there are a few key differences:

Specific provisions

  • EU AI Act: Covers detailed provisions on the development and usage of GPAI (general-purpose AI) systems
  • AI Convention: Focuses on broader principles to ensure human rights, democracy and the rule of law are not impacted – it doesn’t specifically address GPAI systems

Scope and jurisdiction

  • EU AI Act: Applies only to EU member states, emphasizing market regulation, product safety and consumer protection. It defines specific risk categories with obligations by risk level, i.e., unacceptable, high, limited or minimal
  • AI Convention: Has a potentially global reach, open for signature by CoE member and non-member countries alike. It uses a risk-based approach but doesn’t establish specific risk categories for AI systems – instead, risk is assessed in relation to AI’s potential impact on human rights, democracy and the rule of law

Legal nature

  • EU AI Act: A detailed, prescriptive regulation with specific requirements and prohibitions to comply with if you operate within the EU. It has specific enforcement mechanisms and penalties for non-compliance
  • AI Convention: Establishes broad commitments around AI system use, risks and potential impact. It leaves implementation and specific compliance details to national legislation aligned with the Framework Convention, with the possibility of independently decided penalties, bans or other measures if AI use conflicts with agreed provisions

What else does the Framework Convention on AI cover?

The AI Convention is intended to enable wider international adoption of, and cooperation around, AI regulation. Though its requirements could be considered less strict than those of the EU AI Act, signatories of the Framework indicate their commitment to:

  • Prioritizing human rights – including respect for human dignity, non-discrimination, data protection and privacy – in the use and development of AI
  • Transparency requirements for AI-generated content and disclosure around interactions with AI systems
  • A risk-based approach to AI regulation, specifically on the risks and potential impact of AI systems on human rights, democracy and the rule of law
  • A focus on human-supervised AI, emphasizing transparency, accountability, documentation of use and oversight
  • Principles for trustworthy AI that prioritize safety and data governance
  • Documentation obligations for high-risk AI systems
  • Support for safe innovation through regulatory sandboxes

Another important element of the AI Convention is that it offers a further degree of flexibility to the private sector, giving organizations the option to apply the obligations set by the AI Convention or to implement alternative appropriate measures.

It also includes some exemptions for research and development, and for activities involving national security – provided that human rights obligations still apply.

When will the Framework Convention come into force?

The AI Framework Convention has already garnered signatures from key players like the United States, Canada, Japan, Australia and the UK – and the door remains open for more countries to join the effort in shaping a responsible AI future.

However, the AI Convention requires specific conditions to be met before it can enter into force:

  1. At least three of the ratifying states must be CoE member states
  2. At least five signatories must ratify it – a separate process that typically involves domestic legislative approval
  3. Once these conditions are met, a three-month waiting period follows; the treaty then enters into force on the first day of the month after that period passes

At the time of writing in October 2024, the current signatories of the Framework have not yet completed this ratification process.

Criticism and next steps

While this landmark treaty highlights the growing global focus on ethical AI, concerns have been raised about the enforceability of, and potential loopholes in, the AI Convention.

In a recent Reuters article, Francesca Fanucci, a legal expert at the European Center for Not-for-Profit Law (ECNL) who was involved in the treaty’s drafting process, cautioned that the agreement’s principles are so broad that they may be difficult to enforce in practice. She highlighted exemptions for national security and limited oversight of private companies as particular areas where double standards are a concern.

It remains to be seen how these concerns will be addressed. Meanwhile, the CoE Secretary General, Marija Pejčinović Burić, has urged more countries to sign the AI Convention, pressing those who have already signed to ratify it as soon as possible so it can enter into force.

Want to stay informed on the Framework Convention on AI and other important legislative updates? Don’t forget to subscribe to our blog!

For more about artificial intelligence, check out our other articles below:

Read more about AI