The United Kingdom is to join a new European treaty intended “to ensure that activities within the lifecycle of artificial intelligence systems are fully consistent with human rights, democracy and the rule of law”.
At a meeting today of justice ministers from the 46 Council of Europe states in Vilnius, Lithuania, the lord chancellor Shabana Mahmood will sign the Council of Europe framework convention on artificial intelligence and human rights, democracy and the rule of law.
The treaty will commit states that have ratified it to ensure that
their AI systems are consistent with obligations to protect human rights;
their AI systems are not used to undermine the integrity, independence and effectiveness of democratic institutions and processes, including the principle of the separation of powers, respect for judicial independence and access to justice; and
they have measures that seek to protect their democratic processes in the context of activities within the lifecycle of artificial intelligence systems, including individuals’ fair access to and participation in public debate, as well as their ability to freely form opinions.
The new agreement is the outcome of two years’ work by an intergovernmental body that brought together the Council of Europe member states, the European Union and 11 non-member states — Argentina, Australia, Canada, Costa Rica, the Holy See, Israel, Japan, Mexico, Peru, the United States of America, and Uruguay. All these states are eligible to join it.
It sets out a legal framework that covers the entire lifecycle of AI systems and addresses the risks they may pose, while promoting responsible innovation. The convention adopts what the Council of Europe described as a risk-based approach to the design, development, use and decommissioning of AI systems, requiring careful consideration of any potentially negative consequences.
Launching the treaty in May, the Council of Europe secretary general Marija Pejčinović Burić said it was the first of its kind:
It is a response to the need for an international legal standard supported by states in different continents which share the same values to harness the benefits of artificial intelligence, while mitigating the risks. With this new treaty, we aim to ensure a responsible use of AI that respects human rights, the rule of law and democracy.
Mahmood said in a comment released this morning:
Artificial intelligence has the capacity to radically improve the responsiveness and effectiveness of public services, and turbocharge economic growth.
However, we must not let AI shape us — we must shape AI.
This convention is a major step to ensuring that these new technologies can be harnessed without eroding our oldest values, like human rights and the rule of law.
Signing a treaty such as this indicates support for it. States ratify an agreement once they are ready to be bound by it. This treaty needs only to be ratified by five states, of which three must be Council of Europe members, for those states to be bound by it some three months later.
Once the treaty is ratified and brought into effect in the UK, the government said today, existing laws and measures will be enhanced. For example, aspects of the Online Safety Act 2023 will better tackle the risk of AI using biased data and producing unfair outcomes.
The government added:
AI is likely to bring significant benefits like boosting productivity and increasing cancer detection rates. But the new convention includes important safeguards against its risks, such as the spread of misinformation or using biased data which may prejudice decisions.
No specific enforcement mechanism is included in the treaty but disputes may be settled through a conference of the parties.
The treaty defines an artificial intelligence system as a “machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that may influence physical or virtual environments”.
Hmm... Of the AI convention, you say: ‘No specific enforcement mechanism is included in the treaty but disputes may be settled through a conference of the parties.’ What does that mean?
When I was an articled clerk (as we were called: you and I trained at the same time in London, but you went to Law School at Lancaster Gate, and I to Guildford), my fellow articled clerk spoke of a partner in the firm as confusing work with activity. The partner rushed around, but achieved little. Are most conventions in much the same category? Sign them and it looks as if you are doing something; but a lack of enforcement, with a ‘conference of the parties’ in its place, sounds like activity with little – perhaps nothing – achieved.
In my own field of family law, the United Nations Convention on the Rights of the Child 1989 Art 12 says that a child who is of an age and understanding to form a view should be listened to:
1 States Parties shall assure to the child who is capable of forming his or her own views the right to express those views freely in all matters affecting the child, the views of the child being given due weight in accordance with the age and maturity of the child.
2 For this purpose, the child shall in particular be provided the opportunity to be heard in any judicial and administrative proceedings affecting the child….
Family Procedure Rules 2010 r 9.11 reflects this in slightly flaccid terms for financial proceedings. It says that in any application by a parent involving a ‘financial remedy’ – ie maintenance for a child – the court may order separate representation of a child and may direct the appointment of a children's guardian. I wrote about the case of EL v ML [2023] EWFC 43 (B) (15 February 2023) (bailii.org) a couple of days ago (Who judges the judges? - by David Burrows (substack.com)). In that case His Honour Judge Hess (a circuit judge who sits at the family court in High Holborn) said of a father’s successful application to eliminate arrears of periodical payments ordered to be paid for his mature, but still dependent, sons:
[43] (i) I accept that the court has, in theory, the power to join a child to the proceedings under FPR 2010 Rule 9.11; but the proposition that DJ Cronshaw should have declined to approve the school fees part of the March 2021 Consent Order and instead join the children to seek their views "about whether, in truth, their father should pay school fees" [italics are his; and I do not know who he was quoting] is a surprising proposition. I asked Mr Burrows whether he had ever come across a case where such a step had been taken and he was unable to identify such a case (which accorded with my own experience). I regard the suggestions that, in some way, the argument is assisted by an application of [Art 12] (if it be pursued) … is very surprising to see and I can identify no merit whatever in it.
The father had insisted the boys be privately educated at a leading boys’ private school. He lost his job when he had an affair with a work colleague. As far as Hess knew, the boys would have to leave their school. There was other cash available – or due to be available – to the father. In context, Hess may have been right not to have made an order under r 9.11; but he might have considered it in more legalistic terms – ‘on the merits’, as lawyers say – and by reference to the welfare of the three boys.
My job requires me to read a lot of law reports on family law. Since the departure of Lord Justice Ryder (a highly experienced children lawyer) from the Court of Appeal five years ago, I can remember no mention of Art 12 by other judges at High Court level or above.
[Sorry to be going on about EL v ML]
The timing of the news that the UK is to join the Council of Europe’s framework convention on AI (the “AI Treaty”) could not be more interesting. For starters, Tory ministers were decidedly against the imposition of legislation to regulate AI use in the UK. Regulation was thought of as too continental, saddling British tech innovation with constraints and defined legal frameworks. It was seen as risky business.
Ministers often boasted that the UK housed the most thriving AI ecosystem in Europe, with the third-largest private investment in AI companies (totalling a staggering US$4.65 billion in 2022). The Bank of England revealed that 70% of UK firms operating in the banking and financial services industry use advanced machine-learning algorithms for credit scoring, fraud and money-laundering detection, customer profiling and risk management.
In the meantime, behind the scenes, various issues were highlighted by domestic players including the CMA, FCA, ICO and Ofcom concerning AI and algorithmic bias, the lack of fairness, transparency or accountability mechanisms, and a legislative abyss addressed by neither the GDPR nor the Equality Act.
Now enters Labour, which has decided to join the AI Treaty, proposed by (lo and behold!) the Council of Europe to address (lo and behold!) human rights, democracy and the rule of law. It sounds, smells and reads like regulation is just around the corner, and rightly so!
Questions remain unanswered:
Does this (re)alignment with the Council of Europe's framework mean a substantial change from the Tories’ stance on regulation? Is Labour signalling a desire to be seen as a partner to the Council of Europe, one that shares similar regulatory values and principles?
Ultimately, is this “diplomacy” dressed up as Treaty accession? A small olive branch, a tiny gesture to show that a different captain is at the helm, and one who wishes to extend an invitation to mend the abysmal rift created between Westminster and Brussels post-Brexit? Perhaps time will tell.