Lawyers may soon have to use artificial intelligence, the second most senior judge in England and Wales has said.
In a speech last Friday, Sir Geoffrey Vos argued that using AI was unlikely to be optional in the future:
First, clients will not want to pay for what they can get more cheaply elsewhere. If generative AI can draft a perfectly serviceable contract that can be quickly amended, checked and used, clients will not want to pay a lawyer to draft one instead.
Secondly, in a similar vein, if AI can summarise the salient points contained in thousands of pages of documents in seconds, clients will not want to pay for lawyers to do so manually…
Thirdly, and perhaps more importantly, AI is not only quicker, but may do some tasks more comprehensively than a human adviser or operator can do. The consequence of this reality is that we may need to reconsider the way in which the common law applies to a vast range of activities.
“One may ask rhetorically,” added Vos, “whether lawyers and others in a range of professional services will be able to show that they have used reasonable skill, care and diligence to protect their clients’ interests if they fail to use available AI programmes that would be better, quicker and cheaper.”
His speech to the Manchester Law Society last Friday afternoon was supported by AI-generated images. Unfortunately, these were not included in the version of his remarks that appears on the judiciary website.1
One image purported to show Vos, who as master of the rolls is head of civil justice in England and Wales, sitting in an AI-enabled court. He had then asked AI to create an image showing lawyers who were alarmed by AI. Vos himself was alarmed to see that it showed only men.
AI judging
Not for the first time, Vos was unwilling to predict whether AI was likely to be used for any kind of judicial decision-making. But, he added pointedly, “when automated decision-making is being used in many other fields, it may not be long before parties will be asking why routine decisions cannot be made more quickly and, subject to a right of appeal to a human judge, by a machine”.
He recalled guidance for judges issued by the senior judiciary last December on the use of AI. That advice applied just as much to lawyers, he added.
It had three main principles:
Before using generative AI, you need to understand what it does and what it does not do. Generative AI does not generally provide completely reliable information, because the LLM [large language model] is trained to predict the most likely combination of words from a mass of data. It does not check its responses by reference to an authoritative database. So, be aware that what you get out of an LLM may be inaccurate, incomplete, misleading or biased.
Lawyers and judges must not feed confidential information into public LLMs, because when they do, that information becomes theoretically available to all the world. Some LLMs claim to be confidential, and some can check their work output against accredited databases, but you always need to be absolutely sure that confidentiality is assured.
When you do use an LLM to summarise information or to draft something or for any other purpose, you must check the responses yourself before using them for any purpose. In a few words, you are responsible for your work product, not ChatGPT.
Comment
Vos is by far the most technologically advanced judge in the United Kingdom. For some years now, he has operated not just a paperless office but also a paperless courtroom.
Other leading judges have been content to let him lead the technological revolution, relieved that such a senior member of the judiciary understands the potential not only of AI but also of digitising the courts.
As Vos said in his Manchester speech, he chairs “the new online procedure rules committee which is going to make rules and provide data standards for both the online court processes and the pre-action online dispute resolution processes” within what he has described as the “funnel” of civil justice.
But he would be the first to agree that court users and external advisers have an important part to play in designing new systems. Vos knows that he needs widespread support if his online court is to funnel claims into the correct channels. And that kind of support cannot be generated artificially.
Don’t miss Law in Action on Radio 4 at 4pm today and then on BBC Sounds. We’re devoting the whole programme to methods of diverting young people from knife crime, some of which have been unexpectedly counter-productive.
Among those taking part is a man in his 30s whose youth worker expected he would be stabbed to death before he turned 16.
Also in the programme are:
Bruce Houlder KC, founder of Fighting Knife Crime London
Dr Peter Neyroud from the Cambridge Institute for Criminology
Dr Charlotte Coleman, a forensic psychologist at Sheffield Hallam University
Robin Lockhart from Catalyst in Communities
1. My attempts to generate artificial images of Vos delivering his lecture were equally unsuccessful.
“When automated decision-making is being used in many other fields, it may not be long before parties will be asking why routine decisions cannot be made more quickly and, subject to a right of appeal to a human judge, by a machine.”
The danger with this is whether it could work for more than one generation of lawyers. We trust the right of appeal to the human judge precisely because we assume the human judge has acquired all the requisite knowledge and skills the old-fashioned way. There is only so much thinking and research that can be outsourced without losing the ability to think and research.
There seems to be an internal inconsistency. Vos suggests AI programmes to draft documents or review documents, but the judicial guidance points out (correctly) that "Generative AI does not generally provide completely reliable information" and "does not check its responses by reference to an authoritative database. So beware that what you get out of an LLM may be inaccurate, incomplete, misleading or biased". It goes on to remind us that "you are responsible for your work product, not ChatGPT."

The consequence is that the client must pay for the lawyer's investment in the AI programme, and then the lawyer has to check the work it produces anyway. It would surely be quicker and cheaper if the lawyer did the work in the first place. Inevitably, when you are checking the work of another you assume it will be correct, and so the chances of missing errors increase. It is a bit like relying on a pupil's draft! Not always sensible or reliable. I fear for insurance premiums if reliance on AI becomes common.