AI: risks and opportunities
What judges hope AI can do for them — and what they think it can’t
Two weeks ago, judges were talking about the risks of artificial intelligence. Today, members of the judiciary offer a rare insight into the opportunities AI may bring.
They weren’t talking to me, I hasten to add. Instead, the judges agreed to take part in a series of academic focus groups led by Erin Solovey, Brian Flanagan and Daniel Chen. The authors’ eight-page paper is called “Interacting with AI at Work: Perceptions and Opportunities from the UK Judiciary”.
For the Law Society Gazette, I’ve analysed their findings and reported some of the most interesting judicial observations.
In your Gazette column, Joshua, you write “lose the ability to read…”. I respectfully agree, save that for “ability” a yet more appropriate word might perhaps be “inclination” or, even more to the point, “discipline”?
Resilience and judicial independence of mind will surely always be required, since otherwise it might become a case of unrelenting pressure just to “get through” the work? That would lead to the administrative tail wagging the judicial dog.
As a lawyer doing advice work (rather than transactional work), I was glad to read that judges think the human element is important. I do too. When someone has been laid off, for example, they want to talk things through with a human rather than simply get an ‘answer’ from AI about their rights.
As a lawyer I read judgments, often valuing the nuances I find in them. I expect AI could accurately tell me that a decision was X or Y. But I see cases where the judge decided X while clearly thinking the correct answer should have been Y, had the judge not been bound by precedent.