The Framework Convention on AI is Coming: What Does it Mean for Your Organisation?

The speed at which AI innovation is moving has caught many by surprise. In some countries, lawmakers are struggling to keep up, oscillating between self-regulation (of which the US is an advocate) and a more hands-on approach (e.g., the EU AI Act). But most now recognise the potential for the technology to undermine human rights and the rule of law. A new Council of Europe Framework Convention on Artificial Intelligence addresses these concerns.

It aims to complement existing global standards on human rights, democracy and the rule of law while addressing any legal gaps stemming from rapid AI technological advances. Yet is its technology-agnostic, vaguely worded approach likely to have the kind of impact its proponents claim?

What Is It And Why Now?

Five years in the making, the convention is described as the “first-ever international legally binding treaty” governing AI. It was written by the 46 member states of the Council of Europe (including the UK), the EU, and 11 non-member states, including Australia, Japan, and the US.

It can be seen in the context of a growing number of diverse government efforts to regulate AI in a way that mitigates emerging risks. These include President Biden’s Executive Order on AI of October 2023, the Bletchley Declaration of November 2023, and the King’s Speech announcement that the British government plans to introduce AI legislation to regulate powerful AI models.

How Does It Differ from the EU AI Act?

According to the Council of Europe, the convention includes several fundamental principles that must govern the lifecycle of AI systems:

  • Human dignity and individual autonomy
  • Equality and non-discrimination
  • Respect for privacy and personal data protection
  • Transparency and oversight
  • Accountability and responsibility
  • Reliability
  • Safe innovation

Signatories must document relevant information on AI systems and make it available to anyone affected. This information must be detailed enough so people can challenge decisions made via AI, or even the use of AI itself. They must also be able to lodge a complaint with the authorities. Those authorities must provide “effective procedural guarantees, safeguards and rights” to anyone affected by AI that may impact their human rights and freedoms. The Council also says users should be notified when they’re interacting with an AI system rather than a human.

The convention also requires states to carry out risk and impact assessments covering AI’s effects on human rights, democracy, and the rule of law. Based on those assessments, they must establish “sufficient prevention and mitigation measures” and, where necessary, introduce bans or moratoria on certain AI applications.

So, how does this differ from the EU AI Act? Most obviously, the convention applies to nation-states rather than private businesses, although it also covers private entities acting on behalf of governments. While both instruments aim to protect human rights in the context of AI use, they are “distinct pieces of legislation with very different bases in law”, according to Sarah Pearce, partner at Hunton Andrews Kurth.

“The EU AI Act is a piece of legislation enacted by the European Union which will be enforced directly. The AI Treaty is an international convention signed by various nation-states. Signatories commit to certain principles/obligations and to work with legislators and regulators at a national level to implement and enforce those principles/requirements,” she tells ISMS.online.

“The principles of the AI Convention seem rather broad and require further action by nation-states to ensure implementation and enforcement, so its effectivity is questionable at this stage. By contrast, the EU AI Act is legislation in force and contains a set of legal requirements that organisations in scope have to comply with or risk sanctions for non-compliance. It also includes provisions as to how the legislation will be enforced.”

Will It Make a Difference?

There are several potential challenges with the convention, according to a Bird & Bird analysis:

  • States can choose how to apply its rules to private actors – directly or via “other appropriate measures”. This could lead to discrepancies in how it is applied across the globe, as could the fact that the term “public authority” is not defined in the text.
  • In general, the convention sticks to broad principles rather than specific requirements, meaning that when it is transposed into local laws, there may be a wide variance in regulations.
  • Although compliance reporting is required, the lack of strict enforcement criteria renders the convention somewhat toothless.
  • No remedies, such as fines, have been suggested for breaches of human rights related to the convention – meaning that these could also vary significantly between jurisdictions.

“In all likelihood, most organisations will find compliance with the convention challenging on account of the imprecise language and broad duties,” Matthew Holman, partner at Cripps, tells ISMS.online.

“The EU says that it has done this by implementing the EU AI Act, and it is basically correct on that point. The UK government says that existing national laws already address the points coming from the convention, so no standalone act is needed. Whether it is right on that point is very much up for debate, but I would be inclined to disagree.”

What Are the Next Steps?

The UK government appears keen to uphold its position as a leader in AI safety. It claims existing laws and measures will be “enhanced” once the treaty is ratified and that it will work closely with “regulators, the devolved administrations, and local authorities” to implement the new requirements.

Until then, says Holman, both public and private sector organisations potentially impacted by the convention should ensure that any AI development aligns with the Council of Europe’s core values of human rights, democracy, the rule of law, and transparency.

“Public authorities and private actors should aim to enhance the transparency of processes and collaborate with public authorities to create a framework for industry-standard ethical practices,” he concludes.