Stacey, Terrence and Denise
Could artificial intelligence give people bad names and put them behind bars?
The use of artificial intelligence (AI) in the justice system may:
have a chilling effect on human rights,
undermine fair trials,
weaken the rule of law,
exacerbate existing inequalities, and
fail to produce efficiency gains,
the House of Lords justice and home affairs committee says in a report published this morning.
“What would it be like to be convicted and imprisoned on the basis of AI which you don’t understand and which you can’t challenge?” asked Lady Hamwee, the Liberal Democrat solicitor who chairs the committee. “It was different technology — but look at what happened to hundreds of Post Office managers.”
Hamwee’s committee began its work on the assumption that AI, used correctly, had the potential to improve people’s lives:
But while acknowledging the many benefits, we were taken aback by the proliferation of AI tools potentially being used without proper oversight, particularly by police forces across the country. Facial recognition may be the best known of these new technologies but in reality there are many more already in use, with more being developed all the time.
When deployed within the justice system, AI technologies have serious implications for a person’s human rights and civil liberties. At what point could someone be imprisoned on the basis of technology that cannot be explained?
Informed scrutiny is therefore essential to ensure that any new tools deployed in this sphere are safe, necessary, proportionate, and effective. This scrutiny is not happening.
Instead, we uncovered a landscape — a new Wild West — in which new technologies are developing at a pace that public awareness, government and legislation have not kept up with.
Public bodies and all 43 police forces are free to individually commission whatever tools they like or buy them from companies eager to get in on the burgeoning AI market.
And the market itself is worryingly opaque. We were told that public bodies often do not know much about the systems they are buying and will be implementing, due to the seller’s insistence on commercial confidentiality — despite the fact that many of these systems will be harvesting, and relying on, data from the general public.
This is particularly concerning in light of evidence we heard of dubious selling practices and claims made by vendors as to their products’ effectiveness which are often untested and unproven.
We learnt that there is no central register of AI technologies, making it virtually impossible to find out where and how they are being used; or for parliament, the media, academia and, importantly, those subject to their use to scrutinise and challenge them. Without transparency, there can be not only no scrutiny but also no accountability when things go wrong. We therefore call for the establishment of a mandatory register of algorithms used in relevant tools…
Thanks to its ability to identify patterns within data, AI is increasingly being used in “predictive policing”: forecasting crime before it happens.
AI therefore offers a huge opportunity to better prevent crime — but there is also a risk it could exacerbate discrimination. The committee heard repeated concerns about the dangers of human bias contained in the original data being reflected, and further embedded, in decisions made by algorithms.
As one witness told us: “We are not building criminal risk assessment tools to identify insider trading or who is going to commit the next kind of corporate fraud… We are looking at high-volume data that is mostly about poor people.”
Sham marriage algorithm
The committee was told about an automated triage system used by the Home Office and known as the sham marriage algorithm:
When a couple (of whom at least one is not a “relevant national” or lacks appropriate immigration status or a valid visa) gives notice to be married, an algorithm sorts them into a “red” or “green” category. A red light is a flag for an investigation and a human decision-maker then considers whether an investigation is needed.
The Public Law Project told us that “the detail of the human review stage is unclear. We do not know whether the human decision-maker exercises meaningful discretion.” An investigation can include interviews or home visits. If the couple does not comply with this investigation, they may not be allowed to marry.
While the decision on the genuineness of the marriage is in the hands of an official, the Public Law Project were concerned that the human decision-maker may fall victim to “automation bias”, defined as “a well-established psychological phenomenon whereby people put too much trust in computers”.
The stakes are high: both marriage and immigration status may be at risk.
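To make the triage-then-review pattern concrete, here is a minimal, purely illustrative sketch in Python. It is not the Home Office system: the inputs, the scoring rule and the “red”/“green” split are all invented for illustration, and the sketch exists only to show how an automated flag followed by a human decision-maker might fit together, and where automation bias can creep in if the reviewer simply rubber-stamps the machine’s category.

```python
# Purely illustrative sketch of a triage-then-human-review workflow.
# Nothing here reflects the Home Office's actual sham marriage algorithm:
# the fields, rule and categories below are invented for illustration only.

from dataclasses import dataclass


@dataclass
class NoticeOfMarriage:
    """Hypothetical record created when a couple gives notice to marry."""
    couple_id: str
    relevant_national: bool   # at least one party is a "relevant national"
    valid_visa: bool          # the non-national party holds a valid visa


def triage(notice: NoticeOfMarriage) -> str:
    """Automated stage: sort the notice into a 'red' or 'green' category.

    A real system would draw on many more data points; this invented rule
    only shows the shape of the decision, not its content.
    """
    if notice.relevant_national or notice.valid_visa:
        return "green"
    return "red"


def human_review(notice: NoticeOfMarriage, category: str) -> bool:
    """Human stage: decide whether an investigation is needed.

    Automation bias is the risk that this step collapses into
    'investigate whenever the machine says red'. A meaningful review
    would weigh the case on its own facts, not just the flag.
    """
    if category == "green":
        return False
    print(f"Case {notice.couple_id} flagged red: reviewer must decide on the facts.")
    return True


if __name__ == "__main__":
    notice = NoticeOfMarriage("C-001", relevant_national=False, valid_visa=False)
    category = triage(notice)
    investigate = human_review(notice, category)
    print(f"Category: {category}; investigation opened: {investigate}")
```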
Stacey, Terrence and Denise
Committee members were constantly warned about biases embedded into algorithms. According to the campaign group Big Brother Watch, systems could attribute demographic characteristics to different stereotypes:
“Families with needs” were profiled… with names like Stacey, while “low-income workers” were typified as having… names like Terrence and Denise.
Comment
I have to say that I do not fully understand some of this. Neither, I suspect, do some members of the justice and home affairs committee.
But that, of course, is the whole point of their report.
More than ever, justice needs to be open. The Post Office scandal is indeed a timely warning. No computer system is error-proof, and all such systems should always be subject to challenge. I write as a life member of the British Computer Society.