Were you aware that data privacy regulations like the General Data Protection Regulation (GDPR) may have inadvertently created a security vulnerability?
GDPR and other “privacy by design” laws were established to give individuals back control over their data and protect their identities, but they have also opened loopholes – ones cybercriminals can easily exploit to tap into your valuable (and personal) data without you noticing.
A year ago, most companies were anxious about whether they had the appropriate data protection processes in place to be GDPR compliant. At Black Hat USA earlier this year, however, Oxford-based researcher James Pavur demonstrated that some organisations are now discovering the regulation may have unintentionally built a huge security vulnerability.
Since the introduction of the legislation, Mr Pavur has contacted dozens of UK- and US-based companies to simulate how they would handle a “right of access” request made in another person’s name. To his surprise, approximately one in four firms shared his partner’s personal information without consent, after he made a false demand for the data by citing GDPR.
This raises the question: is “data protection” something of a misnomer, given that it doesn’t actually “protect” our data? It merely creates a more transparent system – so perhaps a more accurate name for the policy would be “GDTR”. Under the guidelines, organisations are urged to hand over data quickly and without charging the data subject. In reality, the issue is proving that the person requesting the information is really who they say they are.
Speaking from experience, this can manifest in a number of ways. An organisation holding your private data – a healthcare company or a bank, say – is targeted by cybercriminals and receives a request for access to your details from an individual it assumes is you.
Before sharing any data, it must confirm that the individual really is you. Unfortunately, cybercriminals can easily impersonate a person by answering common security questions (in our oversharing society, much of this data can be found on social networks).
They have even defeated more advanced methods: two-factor authentication (2FA), for example, can be bypassed or duped using SIM-swap or call-divert fraud. By exploiting this loophole with some nifty digital skills, the fraudster fools the firm into thinking they are actually you, and the data is released.
The firm would argue it has acted exactly as the law describes and recommends, and is none the wiser – yet it has just handed over a customer’s personal information, which can easily be sold on the black market.
This loophole exists because, while GDPR does advise on identity verification, it doesn’t mandate how it should be done, leaving the process open to interpretation and, ultimately, vulnerability. Nor is this a scenario experienced only in the UK: organisations subject to California’s Consumer Privacy Act of 2018, Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA), and other laws on the books today face the same problem.
However, analytics such as behavioural biometrics, location patterns, and device fingerprinting can be leveraged by organisations to detect anomalies to ensure people are who they say they are, so that confidential information isn’t mistakenly leaked.
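As a loose illustration of how those signals might be combined (this is a hypothetical sketch, not Callsign’s actual product – all names, weights, and thresholds are invented for the example), a simple rule-based risk score could flag a data-access request that arrives from an unknown device, an unusual location, or with atypical typing behaviour:

```python
# Hypothetical sketch: combine simple signals into a risk score for a
# data-access request. Weights and thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class RequestContext:
    device_fingerprint: str    # hash of browser/device attributes
    country: str               # geolocated origin of the request
    typing_cadence_ms: float   # avg inter-keystroke time (behavioural signal)

@dataclass
class CustomerProfile:
    known_fingerprints: set
    usual_countries: set
    typical_cadence_ms: float

def risk_score(req: RequestContext, profile: CustomerProfile) -> float:
    """Return a 0.0-1.0 risk score; higher means more anomalous."""
    score = 0.0
    if req.device_fingerprint not in profile.known_fingerprints:
        score += 0.4   # request comes from a device never seen before
    if req.country not in profile.usual_countries:
        score += 0.3   # request comes from an unusual location
    if abs(req.typing_cadence_ms - profile.typical_cadence_ms) > 50:
        score += 0.3   # typing rhythm deviates from the user's norm
    return score

def requires_step_up(req: RequestContext, profile: CustomerProfile,
                     threshold: float = 0.5) -> bool:
    """Escalate to stronger identity verification before releasing data."""
    return risk_score(req, profile) >= threshold
```

A request matching the customer’s usual device, country, and behaviour would score low and proceed, while an anomalous one would be routed to a stronger verification step rather than being fulfilled automatically.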
In terms of GDPR, there are several measures companies need to take before they become truly compliant. Yet it can be difficult to navigate complicated legislation that doesn’t always provide a roadmap for how to actually achieve compliance. Security teams can take advantage of policy management software to create a suitable identification user experience, while also picking and choosing the most relevant authentication methods for their customers.
Organisations must have a verification policy in place to confirm, before handing over any data, that the person requesting it is who they say they are. For instance, if a customer doesn’t want to identify themselves via biometric functionality on their smartphone, they should have an alternative choice that doesn’t compromise security.
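One way to picture such a policy (again, a hypothetical sketch – the request type and method names are invented, and no specific product is implied) is as a table mapping each request type to the method combinations that satisfy it, so a customer who declines biometrics can instead complete an equivalent pair of fallback factors:

```python
# Hypothetical policy sketch: each request type lists the acceptable
# verification-method combinations; completing any one combination suffices.
VERIFICATION_POLICY = {
    "subject_access_request": [
        {"device_biometric"},                      # e.g. fingerprint on the phone
        {"one_time_passcode", "knowledge_check"},  # fallback: two weaker factors
    ],
}

def acceptable(request_type: str, completed_methods: set) -> bool:
    """True if the completed methods satisfy at least one allowed combination."""
    for combo in VERIFICATION_POLICY.get(request_type, []):
        if combo <= completed_methods:  # subset check: all of combo completed
            return True
    return False
```

Under this sketch, `acceptable("subject_access_request", {"device_biometric"})` passes, a one-time passcode alone does not, and the passcode plus a knowledge check does – offering choice without lowering the bar.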
Such a policy also helps the company avoid collecting additional personally identifiable information (PII) when performing the identity check, which would create a massive GDPR headache of its own. Instead, businesses should learn to make smart use of the data they already own.
Broadly speaking, the right to restriction and granular consent are extremely complex and costly for companies to handle when it comes to authentication; most existing infrastructures simply weren’t built that way. Since the introduction of GDPR, none of our data indicates that consumers are worrying more about their cybersecurity, although they are certainly more aware that their data is being collected and how it might be used. Anecdotally, we do find that data collection now causes organisations increased friction and headaches as they try to strike the right balance between delivering frictionless services, or experiences, and meeting GDPR requirements.
It is therefore fundamental that companies implement policies to ensure they handle customer data in a truly GDPR-compliant manner and avoid these loopholes. Don’t fall into the trap of thinking that adherence to the regulation means a poorer customer experience. Callsign has seen an increase in the number of organisations giving consumers added choice in how they authenticate and interact online in response to requests for information (RFIs), but there is still a way to go before businesses make a real commitment to better privacy. The one thing to always remember: use your commitment to privacy as a differentiator, rather than a tickbox.
By Sarah Whipp, CMO and Head of Go To Market Strategy, Callsign.