A court in the Netherlands has halted governmental use of artificial intelligence (AI) to identify fraud cases, ruling that the practice violates human rights.
The decision has been welcomed by privacy activists throughout Europe, who are concerned by increasing government reliance on risk modelling within welfare benefits and similar services.
The broader fear among human rights campaigners is that, as societies digitize, a lack of oversight and regulation can leave vulnerable people unfairly penalized or even placed under surveillance.
As reported in the Guardian, Stephen Timms, chairman of the House of Commons work and pensions select committee, said:
“This ruling by the Dutch courts demonstrates that parliaments ought to look very closely at the ways in which governments use technology in the social security system, to protect the rights of their citizens.”
The Dutch court’s decision was also welcomed by the UN’s Philip Alston, who said the verdict was a “clear victory for all those who are justifiably concerned about the serious threats digital welfare systems pose for human rights.”
Mr Alston said the ruling established a “strong legal precedent for others to follow.”
“This is one of the first times a court anywhere has stopped the use of digital technologies and abundant digital information by welfare authorities on human rights grounds,” he added.