GDPR - December 23, 2019

#Privacy: How to make the face fit: facial recognition and GDPR

By Andy Bridges, Data Quality and Governance Manager at REaD Group.

In the era of responsible data stewardship and enhanced consumer rights, how does facial recognition fit with the GDPR principles of transparency, Privacy by Design and Privacy by Default?

As was revealed in August 2019, the use of facial recognition is everywhere, and has been for some time. The revelation that many private companies have been secretly using the technology raises many questions: legal, ethical and political.

It is clear that facial recognition technology needs to be closely regulated, but at present it is not. Establishing a legal basis for processing with this technology must take account of fundamental rights, privacy and ethical responsibilities.

Facial recognition requires a clear definition of rules and legal processes which must be followed to ensure this type of data is regulated correctly. How accurate facial recognition is in practice – and the dangers of inbuilt bias – also need to be considered.

Is it legal?

In Britain, there is no law that gives the police the power to use facial recognition and no government policy on its use. Having said that, facial recognition techniques have been used by the Metropolitan and South Wales Police forces on a trial basis since 2016, while the Financial Times reported that facial recognition technology is currently being used in King’s Cross and may be deployed in Canary Wharf.

The Telegraph has also just reported that AI and facial recognition technology are being used for the first time in job interviews in the UK to identify the best candidates, raising questions around in-built biases that could discriminate against some candidates and exclude talented applicants who might not conform to the norm.

Facial recognition technology itself is not illegal (think how willingly and unthinkingly we allow Facebook, Google, airports and even our own smartphones to catalogue and record our faces), but the difference in those cases is consent. The problem is that there are no laws, policies or guidelines to govern its use. The processing of the data it captures, on the other hand, is where GDPR principles can be applied.

The Information Commissioner’s Office (ICO), commenting on the use of live facial recognition technology in King’s Cross, said:

“Scanning people’s faces as they lawfully go about their daily lives, in order to identify them, is a potential threat to privacy that should concern us all. That is especially the case if it is done without people’s knowledge or understanding.”

The Metropolitan Police have self-imposed conditions on its use to ensure compliance with the Human Rights Act 1998, in particular the right to respect for private and family life (Article 8). Facial recognition will only be used if the following conditions are met:

  • The overall benefits to public safety must be great enough to outweigh any potential public distrust in the technology;

  • It can be evidenced that using the technology will not generate gender or racial bias in policing operations;

  • Each deployment must be assessed and authorised to ensure that it is both necessary and proportionate for a specific policing purpose, and a lawful basis must be determined on a case-by-case basis;

  • Operators are trained to understand the risks associated with use of the software and understand they are accountable;

  • Both the MPS and the Mayor’s Office for Policing and Crime develop strict guidelines to ensure that deployments balance the benefits of this technology with the potential intrusion on the public.

Because there are no governmental laws, policies or guidelines on the use of facial recognition in a public place, members of the public have little recourse. Appealing a decision to install or use facial recognition cameras relies on ethical guidelines; suing the police if incorrectly identified as a suspect relies on the ICO and legal advice; and covering one’s face when approaching a facial recognition camera in a public space is subject to the police’s power to require removal of face coverings. In all cases, there is no legal protection for members of the public.

The ICO has confirmed that it will prioritise regulation of surveillance and facial recognition technology in the next year.

Balancing the rights of consumers

As new technology emerges, and sensitive personal information is collected, it must be balanced against consumers’ legal rights. The use of this new technology is indirectly regulated by the Data Protection Act 2018, in relation to the images that are gathered and how they are handled; the Protection of Freedoms Act 2012, which contains sections relating to a code of conduct for security surveillance devices; and the Human Rights Act. The pace of technological change is perhaps faster than the pace of legal change in this area.

When using facial recognition, what must be taken into consideration is the overall benefit to public safety, which must outweigh consumers’ distrust in technology of this kind. In addition, each time facial recognition is used, it must be assessed to ensure it is proportionate, and that fundamental rights have been considered and balanced against a lawful basis for processing under the Data Protection Act 2018.

There are several lawful bases to choose from, and these should be considered in the form of a Data Protection Impact Assessment (DPIA), which must be conducted if a processing operation could result in a ‘high risk’ to the rights and freedoms of individuals.

Some examples of the types of conditions that would require a DPIA include:

  • When using new technologies

  • When tracking people’s location or behaviour

  • When systematically monitoring a publicly-accessible place on a large scale

  • When processing personal data related to “racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person’s sex life or sexual orientation”

  • When data processing is used to make automated decisions about people that could have legal (or similarly significant) effects

  • When processing children’s data
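The trigger conditions above amount to a screening checklist: if any one of them applies, a DPIA is needed. That logic can be sketched in a few lines of Python; the class and field names below are purely illustrative and not taken from any official ICO tooling.

```python
# Illustrative sketch: flag whether a planned processing operation
# matches any of the DPIA trigger conditions listed above.
# All names here are hypothetical.

from dataclasses import dataclass


@dataclass
class ProcessingOperation:
    uses_new_technology: bool = False
    tracks_location_or_behaviour: bool = False
    monitors_public_space_at_scale: bool = False
    processes_special_category_data: bool = False  # e.g. biometric data
    automated_decisions_with_legal_effect: bool = False
    processes_childrens_data: bool = False


def dpia_required(op: ProcessingOperation) -> bool:
    """Return True if any high-risk trigger condition applies."""
    return any([
        op.uses_new_technology,
        op.tracks_location_or_behaviour,
        op.monitors_public_space_at_scale,
        op.processes_special_category_data,
        op.automated_decisions_with_legal_effect,
        op.processes_childrens_data,
    ])


# A live facial recognition deployment hits several triggers at once:
lfr = ProcessingOperation(
    uses_new_technology=True,
    monitors_public_space_at_scale=True,
    processes_special_category_data=True,
)
print(dpia_required(lfr))  # True
```

Note that live facial recognition in a public place typically satisfies several of these conditions simultaneously, which is why a DPIA is effectively unavoidable for such deployments.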

Lawful basis

This then raises the questions of which lawful basis is the most appropriate, and how facial data is collected, managed, recorded and protected. These must be determined before facial recognition is used: compliance cannot be a retrospective exercise.

Data protection by design and by default is the GDPR requirement to ensure that the appropriate technical and organisational measures to implement the data protection principles and safeguard individual rights are put in place. This means that organisations have to integrate data protection into data processing activities and business practices, from the design stage right through the lifecycle.

Previously known as ‘privacy by design’, it is now a legal requirement under GDPR. Data protection by design is about considering data protection and privacy issues upfront. It can help to ensure compliance with the GDPR’s fundamental principles and requirements, and forms part of the focus on accountability.
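One concrete 'by design' measure is pseudonymisation at the point of capture, so that a direct identifier never enters downstream systems in raw form. The sketch below is a minimal illustration of that idea using a keyed hash; the function names and key handling are assumptions for the example, not a production design.

```python
# Minimal sketch of a 'data protection by design' measure:
# replace a direct identifier with a keyed hash (a pseudonym)
# at the point of capture. Purely illustrative.

import hashlib
import hmac
import os

SECRET_KEY = os.urandom(32)  # in practice, managed in a key vault


def pseudonymise(identifier: str) -> str:
    """Map a direct identifier to a stable pseudonym under one key."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()


token = pseudonymise("subject-1234")
# The same input always maps to the same token under a given key,
# so records can still be linked without storing the raw identifier.
assert token == pseudonymise("subject-1234")
assert token != pseudonymise("subject-5678")
```

The design point is that the decision to pseudonymise is made when the pipeline is designed, not bolted on after data has already been collected, which is exactly what the 'by design' requirement demands.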

In a world where the consumer is increasingly well informed about the use of their data, the use of facial recognition technology poses many more questions than it answers. The shaping of the regulatory landscape needs to ensure that all new technologies have carefully considered and assessed both the fundamental and human rights of individuals. Applying the legal basis for processing for this technology is, as we have seen, a practical and moral minefield. How that will be managed remains to be seen.

About the author

Andrew Bridges joined REaD Group as Data Quality and Governance Manager in 2016 to spearhead the company’s commitment to providing industry leading standards of data quality and governance. A key part of Andrew’s remit is ensuring REaD Group remains at the forefront of the EU regulatory landscape, in particular gearing up for the introduction of the new General Data Protection Regulation.

REaD Group is a marketing data and insight company that uses its unrivalled data products, insight and expertise to help its clients get closer to their customers, offering market-leading data quality and cleaning solutions and trusted marketing data.

The post #Privacy: How to make the face fit: facial recognition and GDPR appeared first on PrivSec Report.
