A push for algorithmic accountability and transparency is growing steadily in the era of Black Lives Matter, but privacy experts are asking an important question: even if predictive technologies were no longer biased, would we want them?
In this month’s Last Thursday in Privacy event, privacy experts discussed how negative characteristics of social life, such as discrimination, have been replicated in artificial intelligence technologies and “weaponised” against black communities.
Experts argued that society must begin by rejecting technological solutions as the default response to social challenges. “The response to a political artefact cannot be just technological,” said Ivana Bartoletti, Technical Director of Privacy at Deloitte.
Aidan Peppin, researcher at the Ada Lovelace Institute, said: “These issues shouldn’t be mitigated as part of the process; they have to be embedded within design principles. You have to think of discrimination at that point.” Auditing AI for bias only after it has surfaced is not enough, he said.
AI technology is affecting people in ways that we do not fully understand, said Nero Ughwujabo, former Special Advisor to the Prime Minister. As a result, it is crucial to make a sustained effort to talk to the communities affected by biased technologies.
He added that both a global framework for developing automated decision-making systems and local-level policy are needed.
The panel featured Ivana Bartoletti, Technical Director of Privacy at Deloitte; Aidan Peppin, researcher at the Ada Lovelace Institute; Nero Ughwujabo, former Special Advisor to the Prime Minister on Social Justice, Young People and Opportunity; and Alison Gardener, lecturer in Data Science at Keele University. To watch the webinar on demand, click here. The next Last Thursday in Privacy is on 30 July – register your place here.
The post “Even if AI was rid of bias, do we want it?”, asks privacy experts appeared first on PrivSec Report.