Police defend facial recognition target selection to Lords

As UK police look to increase their adoption of live facial recognition, a House of Lords committee has heard that serious issues remain around the proportionality and necessity of the way forces are using the technology.

On 12 December 2023, the Lords Justice and Home Affairs Committee (JHAC) – which has launched a short follow-up inquiry into the use of artificial intelligence (AI) by UK police, this time looking specifically at live facial recognition (LFR) – heard from senior Metropolitan Police and South Wales Police officers about the improving accuracy of the technology, as well as how both forces are managing their deployments.  

Claiming there was a “very clear focus” on the most serious criminality, they also told the Lords about the operational benefits of LFR technology, which include the ability to find people they would not otherwise be able to locate, as well as its use as a preventative measure to deter criminal conduct.

At the same time, they confirmed that both forces use generic “crime categories” to determine targets for their live facial recognition deployments, bringing into question claims that their use of the technology is concentrated on specific offenders who present the greatest risk to society.

Academic Karen Yeung, an interdisciplinary professorial fellow in law, ethics and informatics at Birmingham Law School, challenged the proportionality and necessity of this approach during the evidence session, claiming the coercive power of the state means police must be able to justify each entry to the watchlists based on the specific circumstances involved, rather than their blanket inclusion via “crime types”.

The new inquiry follows a 10-month investigation into the use of advanced algorithmic technologies by UK police, including facial recognition and various crime “prediction” tools, which concluded with the JHAC describing the situation as “a new Wild West”, characterised by a lack of strategy, accountability and transparency from the top down.

In a report published in March 2022, the committee said: “The use of advanced technologies in the application of the law poses a real and current risk to human rights and to the rule of law.

“Unless this is acknowledged and addressed, the potential benefits of using advanced technologies may be outweighed by the harm that will occur and the distrust it will create.”

Throughout the inquiry, the JHAC heard from expert witnesses – including Yeung – that UK police are introducing new technologies with very little scrutiny or training, continuing to deploy new technologies without clear evidence about their efficacy or impacts, and have conflicting interests with their own tech suppliers.

In July 2022, however, the UK government largely rejected its findings and recommendations, claiming there was already “a comprehensive network of checks and balances”.

‘Clear focus on serious crime’

During the follow-up session, temporary deputy chief constable Mark Travis, the senior responsible officer (SRO) for facial recognition at South Wales Police, said the force had a “very clear focus” in deploying the technology to deal with “the most serious crime, the most serious vulnerability”.


Giving the example of large-scale events like football matches or concerts, he said the technology could be used “to identify people who may be coming to that venue with the intent of committing crime – that could be serious crimes such as terrorism, it could be crime against vulnerable people, against young girls and women.”

Travis added that the threshold for deployment had been kept deliberately high, with the decision to deploy ultimately taken by an officer with a rank of assistant chief constable or above, to ensure the benchmarks for necessity and proportionality are met in every instance.  

“We have deployed our facial recognition system 14 times in the last year, and … we’ve reviewed 819,943 people and had zero errors. It’s small in the number of deployments, it’s small in intrusion [to rights], and it’s high quality in terms of its accuracy”
Mark Travis, South Wales Police

It should be noted that before UK police can deploy facial recognition technology, they must ensure the deployments are “authorised by law”, that the consequent interference with rights (such as the right to privacy) is undertaken for a legally “recognised” or “legitimate” aim, and that this interference is both necessary and proportionate.

For example, the Met Police’s legal mandate document – which sets out the complex patchwork of legislation that covers use of the technology – says the “authorising officers need to decide the use of LFR is necessary, and not just desirable, to enable the Met to achieve its legitimate aim”.

Travis added that once a decision has been made to deploy LFR, the force has around 20 specialist officers trained on the equipment to determine whether or not the system is making accurate matches. If they determine a match is accurate, he said, the information is then passed to an officer outside the facial recognition van, who will then engage the member of the public.

“We have deployed our facial recognition system 14 times in the last year, and … we’ve reviewed 819,943 people and had zero errors,” he said. “It’s small in the number of deployments, it’s small in intrusion [to rights], and it’s high quality in terms of its accuracy.”

In April 2023, research commissioned by the police forces found “substantial improvement” in the accuracy of their systems when used in certain settings.

Speaking about her own force’s approach to deploying LFR, Met Police director of intelligence Lindsey Chiswick highlighted the Met’s use of the technology in Croydon on 7 December: “Croydon this year has the highest murder rate, it’s got the highest number of knife crime-related offences, and it’s got a really blossoming nighttime economy, and with that comes problems like violence against women and girls.”

She added that once this “intelligence case” has been established and the decision to deploy has been taken, a watchlist is then pulled together off the back of that picture.

“The watchlist is pulled together not based on an individual, but based on those crime types that I just talked about. It’s then taken to approval from an authorising officer. In the Met, the authorising officer is superintendent-level or above,” she said.


Chiswick added that seven people were arrested as a result of the 7 December deployment, including for rape, burglary, criminal damage, possession of Class A drugs, suspicion of fraud and failing to appear for a road traffic offence, as well as someone on recall to prison for robbery.

“So seven significant offences and people found who were wanted by the police who we wouldn’t have otherwise been able to find without the technology.”

She added that, in any given morning briefing, officers may see 100 images of wanted people the police are searching for. “There’s no way an officer could remember those individual faces. All this technology does is up the chances of plucking that face out of a crowd. What comes after that, at that point, is normal policing,” she said.

Chiswick revealed that over the course of the Met’s last 19 LFR deployments, there have been 26 arrests and two false alerts.

Picking up on Chiswick’s claim that once someone is identified by LFR “normal policing kicks in”, committee chair Baroness Hamwee asked whether the scale of the tech’s use would meet the tests for necessity and proportionality, given that, “by definition, it’s bigger than a couple of officers who happened to be crossing Oxford Circus”.

Chiswick said: “There’s a balance here between security and privacy. For me, I think it’s absolutely fair, and the majority of the public – when we look at public surveys, between 60 and 80% of the public are supportive of law enforcement using the technology.”

However, when the Lords noted those were national surveys that might not reflect the views of different local communities towards policing, she added that “when you drill down into specific community groups, that support does start to drop” and that there would be community engagement pre-deployment in an attempt to quell any fears people might have.

In line with Chiswick’s previous appearance before the committee, in which she highlighted the technology’s “deterrence effect”, Travis also said LFR can “deter people coming to a location … to cause harm to other people” and that it helps to create “a safe environment”.

He gave the example of a pop concert, where young people might be concerned about individuals with “an intent [to harm] young people” also going to that concert, but the use of LFR might deter those individuals from coming because the technology meant they would be identified by police.

Delivering the promises?

According to Yeung, the technology needs to be assessed in two ways: one, whether the functional performance of the software works as claimed, and two, whether or not it delivers the benefits promised in real-world settings.

“What I want to focus on is the question of operational effectiveness … [and whether] it will actually deliver the promised benefit in your specific contextual circumstances,” she said. “This is where we need to keep in mind that there’s a world of difference between accurate functional performance of matching software in a stable laboratory setting, and then in a real-world setting.”


Yeung added: “While there is every reason to believe the software is getting better in terms of accuracy … there is still a massive operational challenge of converting a match alert into the lawful apprehension of a person who is wanted for genuine reasons that match the test of legality.”

To achieve this, she said officers on the ground need to be “capable of intervening in complex dynamic settings”, but warned that “the mere fact an alert has been generated does not in and of itself satisfy the common law test of reasonable suspicion”.

“So when a police officer stops a person in the street, they do so on the basis of voluntary cooperation of that person producing identification, because the mere fact of a facial recognition alert match is not enough, in my view, to satisfy the reasonable suspicion test.”

Pointing to a number of LFR-related arrests for shoplifting and minor drug offences during the Met’s early 2022 deployments, Yeung said: “There is a divergence between the claims that they only put pictures of those wanted for serious crimes on the watchlist, and the fact that in the Oxford Circus deployment alone, there were over 9,700 images.”

Responding to Yeung’s challenge on how the proportionality and necessity of including each image was decided, Chiswick said the watchlists in London are so large because there are “a lot of wanted offenders in London that we’re trying to find”.

She further explained how watchlists are selected based on the crime type categories linked to the images of people’s faces, rather than based on intelligence about specific people.

“In a liberal, democratic society, it’s essential that the coercive power of the state is used to justify every single decision about an individual. The fact that we’re categorising people but we’re not actually evaluating each specific person troubles me deeply”
Karen Yeung, Birmingham Law School

Travis also confirmed that watchlist images are selected based on categories: “It’s not an individual identification for a person for each deployment.”

Yeung pointed out that there “seems to be a bit of a mismatch” between claims the watchlists are only populated by people wanted for serious crimes and the reality that people are selected based on the crime category attached to their image.

“In a liberal, democratic society, it’s essential that the coercive power of the state is used to justify every single decision about an individual,” she said. “The fact that we’re categorising people but we’re not actually evaluating each specific person troubles me deeply.”

Chiswick responded that whether or not something is “serious” depends on the context, and that, for example, retailers suffering from prolific shoplifting would be “serious for them”.

She added that each watchlist is deleted after each operation, and that the processing of the biometric information of people who are not matched is “fleeting, instantaneous”, as the image captured is automatically and immediately deleted.

Yeung…
