Lawyers and judges must embrace generative artificial intelligence, the head of civil justice for England and Wales said yesterday.
Speaking at the LawtechUK Generative AI Event, the master of the rolls Sir Geoffrey Vos gave three reasons:
Those we serve are using it: all other industrial, financial and consumer sectors will be using AI at every level.
Wider, cheaper, faster: it will make what we do available to more people, more cheaply, and allow us to do necessary things more quickly.
AI creates work for lawyers: one of the biggest fields of legal activity in years to come is likely to be claims for the negligent or inappropriate use — or failure to use — AI.
Vos, the second most senior judge in England and Wales, has spoken about AI for some years now. He appeared somewhat irritated by the responses he receives:
Only last week, I spoke at the launch of Justice’s AI in our Justice System report. I was struck by the reactions of some of the lawyers in the audience: nodding vigorously when the risks of AI were mentioned and freezing when it was suggested that even lawyers might have to find ways to use AI to expedite and reduce the cost of both legal advice and dispute resolution…
Whenever I say that generative AI will save lawyers time and money, someone pipes up with the example of a lawyer who used GenAI to write submissions which included a fictitious case reference. The first and best example of that was the hapless Steven Schwartz in New York, who got his comeuppance from Judge P. Kevin Castel (who I have recently met). But that is what I mean. We should not be using silly examples of bad practice as a reason to shun the entirety of a new technology.
AI tools were not inherently problematic, the master of the rolls explained, so long as they were used appropriately:
Before using generative AI, you need to understand what it does and what it does not do. Large Language Models (LLMs) are trained to predict the most likely combination of words from a mass of data. Basic GenAI does not check its responses by reference to an authoritative database.
You must avoid inputting confidential information into public LLMs, because doing so makes the information available to the world. Some LLMs claim to be confidential, and some can check their work output against accredited databases, but it is paramount that that confidentiality is always guaranteed.
When you do use an LLM to summarise information, draft a document or for any other purpose, you must carefully review its responses before using them elsewhere. In a few words, you are responsible for your work product, not ChatGPT.
Vos noted that the Supreme Court of New South Wales had issued a practice note on generative AI last week. It says:
Gen AI must not be used in generating the content of affidavits, witness statements, character references or other material that is intended to reflect the deponent or witness’ evidence and/or opinion, or other material tendered in evidence or used in cross examination.
Where Gen AI has been used in the preparation of written submissions or summaries or skeletons of argument, the author must verify, in the body of the submissions, summaries or skeleton, that all citations, legal and academic authority and case law and legislative references exist, are accurate and are relevant to the proceedings.
Subject to exceptions, Gen AI must not be used to draft or prepare the content of an expert report (or any part of an expert report) without prior leave of the court.
This was more restrictive than the approach taken in England and Wales, he observed. “AI is already being used in many jurisdictions for some of the purposes that the New South Wales guidance says it should not be,” Vos said. “I doubt we will be able to turn back the tide. Our guidance goes with the grain of current usage, making clear that the lawyers are 100% responsible for all their output, AI generated or not.”
Were some decisions best left to machines? On this, Vos noted, opinions differed:
The legal community — internationally, not just here in the UK — needs to consider what kinds of advice and decision-making should and should not be undertaken by a machine.
I suggested in my Blackstone lecture that it was fairly obvious that people would never have the requisite confidence in peculiarly human decisions, like whether children should be removed from their parents, being made by machines. But some of the distinguished Oxford academics present questioned my assumption. They thought that emotive decisions of that kind would be just the type of decision-making that parents would really prefer to be taken out of human hands.
I don’t know who is right. But what the disagreement shows is that we need to start an urgent and wide-ranging discussion about what we want machines to do, and more importantly what we feel that machines should not be allowed to do.
But Vos himself was in no doubt that lawyers and judges would have to embrace AI, “albeit cautiously and responsibly, taking the time that lawyers always like to take before they accept any radical change”.
Perhaps one day a group of LLMs overheating in the legal ‘cloud’ will appoint a Master of the Bots…
Who were the "distinguished Oxford academics present" who "thought that emotive decisions of that kind would be just the type of decision-making that parents would really prefer to be taken out of human hands."? I find this quite odd to say the least, and I would like to know the empirical research basis for it