AI will create a thousand Post Office scandals

At the same time that UK political parties vie in their condemnation of the Post Office scandal, they unite in their promotion of AI as the answer to tricky social problems. This means that, in effect, they are arguing for more of the same; for more occasions where computing and bureaucracy combine to mangle the lives of ordinary people, but scaled by AI in ways that make Horizon's harms look like small beer.

One thing the Horizon IT system and AI have in common is their fallibility; both are complex systems which generate unpredictable errors. However, while the bugs in Fujitsu’s bodged accounting system stem from shoddy software testing, AI’s problems are foundational.

The very operations that give AI its 'wow factor', like recognising faces or answering questions, also make it prone to new kinds of failure, like out-of-distribution errors (think Tesla self-driving car crashes) and hallucinations.

Moreover, thanks to the internal complexity of their millions of parameters, there’s no ironclad way to figure out why an AI system came up with a particular answer. AI doesn’t even need to get to court to create problems of legality; this inherent opacity is the antithesis of any kind of due process.

Language models like ChatGPT also make unreliable witnesses because they are actually trained to produce untruths. Such systems aren’t optimised on facts but on producing output that’s plausible (a very different thing). Even when they sound right, they are literally making things up.

Woe betide the unwary citizen who turns to AI itself for legal advice; many have already been roasted by unsympathetic judges after it turned out they had cited fabricated case law.

AI also amplifies the other dimension of the Post Office scandal – the sustained institutional cruelty towards the sub-postmasters.

Like bureaucracy, AI’s algorithms are a way of organising large systems where abstractions create a wall between a system and those it is applied to, so that the latter are reduced to a collection of disparate labels and categories. Perhaps it’s not surprising, then, that the synergy of state institutions and algorithms has already shown a tendency to scale structural violence.

In the Netherlands, an algorithm falsely accused tens of thousands of families of defrauding the child benefits system – ordered to repay the money, many were left with crippling debts and social exclusion.

In Australia, the Robodebt algorithm labelled 400,000 people as guilty of welfare fraud. This also led to innumerable ruined lives as privatised debt collectors pursued people on the margins, many of whom already had disabilities or mental health issues. As with the Post Office, the Robodebt scheme was known internally to be flawed but was defended to the hilt for years via institutional, political and legal bullying.

Many families targeted by the Dutch algorithm were from minority communities, and it seems the Post Office prosecutions also came with a hefty dose of racism. Their own internal investigation assigned archaic racial codes like ‘Chinese/Japanese types’, ‘Dark Skinned European Types’ and ‘Negroid Types’ to suspect sub-postmasters.

A move to AI systems will reproduce this kind of racial discrimination at an industrial scale, as AI not only ingests the racism embedded in its training data but projects it through reductive classifications and exclusions. And yet, despite the well-documented problems with AI, politicians of all stripes are committed to its mass adoption.

The unwavering belief that sci-fi tech can solve social challenges is captured by the Prime Minister’s claim to “harness the incredible potential of AI to transform our hospitals and schools”, somehow imagining that this will substitute for shortages of teachers and properly paid medical staff or fix the literally collapsing ceilings in the buildings.

The Labour Party, meanwhile, proposes AI as a way to tackle the rise in school absenteeism; another case of taking a complex issue involving vulnerable families and replacing much-needed care with the calculative power of cloud-based computation.

While some of this is the usual attempt to grab news headlines, there are deeper ideological commitments at play. AI is seen as the way to revive the economic system by intensifying trends that have been playing out since the 1970s; reducing job security by replacing workers with automation, and privatising the remaining public services.

To promote AI is privatisation by the back door, as it inevitably means a transfer of control to tech corporations. However, this handover to AI will generate more miscarriages of justice as it proceeds to override the voices of those whose lives it affects.

If we only listened to the very public statements by Silicon Valley figureheads like Sam Altman, Marc Andreessen and Peter Thiel and their visions for the coming society, we would realise that they too, like the Post Office prosecutions, are “an affront to public conscience”.

The Horizon IT scandal, despite its very real horrors, will come to seem quaintly English by comparison with the collateral damage caused by their transhumanist techno-fantasies.

A little-known detail of the Post Office scandal is that, due to some astoundingly bad decisions made in the 1990s, English law presumes that computer evidence is reliable. This at least is fixable by a simple switch of perspective: computer evidence should not be trusted unless evidence can be produced as to its reliability. There is simply no comprehensive metric or test that can be put before a court to remove reasonable doubt that an AI is making things up.

However, we can’t wait for AI-driven scandals to come to court before recognising this, because by then the harm will be done.

AI can't be trusted and should be kept out of any decision-making that might affect people's lives, no matter how modest.

The open question is how we should enact such protections. The common theme of Horizon and the other algorithmic injustices is the merging of bureaucracy and computation into a machinery that ran amok over the ‘little people’.

In all these cases the supposed checks and balances completely failed to hold the system to account. It seems unlikely that anaemic measures like the algorithmic audits proposed in the freshly-inked EU AI Act are going to have any real traction.

The powerful lesson of the Robodebt and Post Office debacles is that the machine is only stopped by ordinary people coming together. In both cases, it was the self-organisation of people affected and their allies which threw the necessary spanner in the works but only, of course, after immense effort and suffering.

Surely it’s better to recognise early on how unjustified confidence in algorithmic authority becomes cover for what the Australian Royal Commission called “venality, incompetence and cowardice”.

Now, not later, is the time to challenge the ideology of ‘AI everywhere’ and push instead for people-centred solutions.
