Police officers in the UK have voiced fears that bias exists in artificial intelligence (AI) software, a UK government-commissioned report has revealed.
The study highlights the danger of such technology amplifying discrimination, leaving some groups more likely to be stopped and searched in the street. It also raises explicit fears that officers could become over-reliant on the automation the software provides, and calls for clearer guidelines on acceptable uses of facial recognition technology.
Speaking to the BBC news website, Alexander Babuta of the Royal United Services Institute (RUSI) said:
“The police are concerned that the lack of clear guidance could lead to uncertainty over acceptable uses of this technology.
“And given the lack of any government policy for police use of data analytics, it means that police forces are going to be reluctant to innovate.
“That means any potential benefits of these technologies may be lost because police forces’ risk aversion may lead them not to try to develop or implement these tools for fear of legal repercussions,” Babuta added.
The study, commissioned by the Centre for Data Ethics and Innovation (CDEI), gathered the views of around 50 specialists, including senior police figures in England and Wales, legal experts, government officials and academics. The CDEI intends to publish a code of practice in the New Year outlining how data analytics should be used in police work.
A chief concern was the reliance on existing police data to train AI systems, since such data may already reflect the biases of arresting officers.
One officer stated:
“Young black men are more likely to be stopped and searched than young white men, and that’s purely down to human bias.
“That human bias is then introduced into the datasets and bias is then generated in the outcomes of the application of those datasets.”
The report also noted that individuals from disadvantaged backgrounds are more likely to use public services that collect greater amounts of personal data, in turn making it more likely that an AI system will flag such citizens as a risk.
An interviewee said:
“We pile loads of resources into a certain area and it becomes a self-fulfilling prophecy, purely because there’s more policing going into that area, not necessarily because of discrimination on the part of officers.”
In response, the National Police Chiefs’ Council underlined that police in the UK always strive to balance protecting civilian rights with keeping people safe.
“For many years police forces have looked to be innovative in their use of technology to protect the public and prevent harm and we continue to explore new approaches to achieve these aims,” Assistant Chief Constable Jonathan Drake said.
“But our values mean we police by consent, so anytime we use new technology we consult with interested parties to ensure any new tactics are fair, ethical and producing the best results for the public,” he added.