How effective were regulations after Cambridge Analytica?
By Klara Lee
Thu 17 September 2020
The Cambridge Analytica scandal revealed how easy it was for political campaigns to use social media data to target and manipulate people into voting a certain way. GDPR and other regulations were quickly introduced to make sure it never happened again.
In this article we’ll look at exactly how GDPR and other regulation has tried to address the problems revealed by Cambridge Analytica, and ask: how effective have they really been?
Refresher: what was Cambridge Analytica?
In 2018, whistle-blower Christopher Wylie revealed to journalists that the personal data of up to 87 million Facebook users had been harvested and passed to Cambridge Analytica, a political consulting firm that used the data to help influence the outcome of the 2016 US presidential election.
The data collected included users’:
- educational background
- political beliefs
- relationship status
- how often they were online
- and much more
This data was then used to profile users: in particular, to identify those who were ‘undecided’ about their vote, and to micro-target them with personalised messages designed to change their opinion about Donald Trump.
According to Christopher Wylie, Facebook was fully aware that Cambridge Analytica had exploited its users’ data, but did not inform the affected individuals.
What were the immediate consequences of Cambridge Analytica?
In 2019, the US Federal Trade Commission issued Facebook a record-breaking $5 billion penalty, and in the UK, the Information Commissioner’s Office (ICO) fined Facebook £500,000 (the maximum fine that could be issued at the time).
The scandal also showed many voters how their data was being used for political manipulation, and trust in social media platforms fell. Facebook is estimated to have lost 15 million US users since 2017, and 57.9% of users surveyed said they had no trust in the platform.
And perhaps most significantly, the EU brought the General Data Protection Regulation (GDPR) into force in 2018. The aim was a regulation that would address and prevent the problems revealed by the scandal: one that could be applied consistently throughout Europe, strengthen the rules on data transfers outside the EU, and give individuals greater control over their personal data.
How effective was increasing maximum fines?
The first change GDPR made was to raise the maximum fine to 4% of a company’s annual global turnover or €20 million, whichever is greater.
Had GDPR been in place during the Cambridge Analytica scandal, Facebook would have faced not the ICO’s £500,000 fine but one of up to 4% of its worldwide annual turnover, put at £315 million. Critics argue, however, that even these fines are still too low.
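The ‘whichever is greater’ rule can be sketched as a simple calculation. The turnover figures below are illustrative only, not any company’s actual accounts:

```python
def gdpr_max_fine(annual_global_turnover_eur: float) -> float:
    """Maximum GDPR fine for the most serious infringements:
    4% of annual global turnover or EUR 20 million, whichever is greater."""
    return max(0.04 * annual_global_turnover_eur, 20_000_000)

# A smaller firm hits the EUR 20m floor; a large firm pays 4% of turnover.
print(gdpr_max_fine(400_000_000))     # prints 20000000 (floor applies)
print(gdpr_max_fine(40_000_000_000))  # prints 1600000000.0 (4% applies)
```

The floor matters: without it, a company with modest turnover would face only a trivial percentage-based penalty.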
A parallel from competition law suggests why: Smuda (2013) concluded that in 67% of European cartel cases, the eventual fines and restrictions did not outweigh the gains from price-fixing.
How effective was clarifying the term ‘personal data’?
GDPR also expanded and clarified the term ‘personal data’, future-proofing it to cover types of data that machines have yet to collect. Unlike the previous legislation (the Data Protection Act 1998), which did not count information like IP addresses and GPS identifiers as personal data, GDPR defines personal data as any information that could directly or indirectly identify a person, including factors specific to their physical, genetic, mental, economic, cultural or social identity.
If GDPR had been in place during the scandal, there would have been a clear understanding of what constitutes personal data and a lot of the data may not have been collected and misused.
But Gabriela Karaulanova, a writer for Solon Law, argues that inferred data, despite not being factual, should also be protected under GDPR as ‘personal data’. On this view, the definition should extend to data produced through complex analytics used to categorise people, which was precisely the method used during Cambridge Analytica. This would ensure maximum protection of individuals, particularly during political elections.
How effective was setting conditions for processing personal data?
Pre-GDPR, companies could gain a user’s ‘consent’ to start legally processing personal data, but users often didn’t really know what they were consenting to: according to one survey, only 3% of people aged 18-34 read the terms and conditions before accepting.
The GDPR addressed this by setting out stricter conditions under which data can be lawfully processed. Companies must now identify one of six lawful bases: consent, contract, legal obligation, vital interests, public task or legitimate interests. Gaining ‘consent’ is also held to higher standards; for example, consent must be given as an opt-in, not via a pre-checked box.
Even with these higher standards, however, Karaulanova argues that GDPR ‘places the burden unreasonably on the individual to educate themselves to understand the risks associated with sharing their data online’. This is made harder because enforcing well-written privacy policies for every website and every company is an impossible task: privacy policies are often ‘excessively long and complex’, and are still not read or understood by most users.
To address this, Karaulanova suggests that future lawmakers implement a ‘clear and unambiguous framework which must be strictly followed by all Member States...this would ensure that companies obey the law to warrant consistency in its application’.
How effectively is GDPR being enforced?
Today, the majority of companies are still not fined for failing to protect their customers’ data. One likely reason: some 600,000 cases have been reported across EU Member States, leaving regulators overwhelmed and unable to address all the notified breaches quickly.
Although regulators may eventually catch up with the backlog of reported cases, Karaulanova emphasises that the weaknesses in the GDPR legislation itself still need to be addressed; otherwise companies will always be able to ‘find loopholes and the user’s data will be at risk of misuse’, especially in the context of political manipulation.
Has GDPR prevented the use of data for political manipulation?
It’s only been two years since the enforcement of GDPR, so it’s still too soon to know whether regulations have fully prevented psychometric election advertising by political parties and data analytics companies.
But many suspect that it still goes on. The ICO has proposed that the government implement a statutory code of practice covering the use of personal information in political campaigning, applying to all data controllers who process personal data for this purpose. This would help users make an informed decision, based on transparent and lawful information, when consenting to the use of their data in political campaigns.
GDPR is a huge step towards stronger privacy protections. However, some companies still face limited or no consequences for failing to protect their customers’ personal data.
By implementing some changes to GDPR, such as the inclusion of ‘inferred data’ in the definition of ‘personal data’, higher maximum fines and the introduction of separate codes of practice to solely address the use of data for political campaigning, GDPR might be able to address some of its limitations.
As many of us have heard before, data has become a vital part of today’s society: it has even been said to replace oil as the world’s most valuable resource. The risk of personal data being misused is therefore high, and individuals need to be protected by legislation that is continually developed to remain effective.