Cambridge Analytica – who’s to blame?


Posted on 2/05/18 at 3:52 PM

By Naomi Stephens (Paralegal) and Amelia Edwards (Lawyer)

The headlines are screaming and the politicians are pontificating: the Cambridge Analytica data breach scandal has turned the spotlight on Big Bad Data, and through all the huffing and puffing that has followed, the unshaken public consensus seems to be that Facebook has, by negligence or design, thrown vulnerable users to the wolves.

We seem to have turned a new page on data security, but what is the real story here?

It’s no secret that Facebook collects enormous volumes of data about its users, which it provides to advertisers and app providers both for financial gain and to deliver functionality and interoperability that benefits users. We like to think that most users of free platforms and apps understand that what they are really engaged in is a barter system: they trade targeted advertising for no-cost enjoyment. What some people appear only just to have realised, however, is the true value of what they’re giving away. App providers don’t want your data just so they can personalise your user experience or give you desired functionality; they want your data (and your friends’ data, and whatever else they can get their hands on) because it has real-world value in the digital economy. But still, there’s nothing shady here – it’s all disclosed in the terms of use which users are required to accept when creating an account. Right?

Enter Cambridge Analytica: a researcher creates a Facebook quiz app that collects not only the data of people who completed the quiz, but (through savvy use of a once more generous Facebook API) the data of their friend connections too. The pay-off: the personal data of 87 million Facebook users worldwide. The Facebook app terms prohibit the selling of data collected via the API, and Cambridge Analytica breached those terms when that data was sold on to political campaign groups. But the initial collection of that data, and Facebook’s corresponding disclosure, was authorised by each user and, generally speaking, completely above board. Just because users made a bad bargain doesn’t mean that Facebook did anything wrong, but when the public wants a bad guy, it’s easier to blame the big guy than to try to understand what actually went on.

As this tale has unfolded, Zuckerberg’s tone has markedly changed: from initial firm denials of responsibility to ultimate abject apology.

But does Facebook have anything to be sorry about?

Privacy and Facebook – the legal side


Privacy laws differ from country to country (and state to state, in the US’s case), but the overriding principle is this: consent is key.  Facebook’s terms of use state that it can use whatever content you share with the platform.  Users can adjust privacy settings to keep certain information private, but the purpose of the platform is to publish and share information and users must consent to that disclosure for the system to function.  Where the use of personal information is obviously essential to the delivery of goods or services requested by a consumer (including free use of an app), consent will usually be implied even where not expressly given.

Legally, Facebook’s terms of use are pretty solid. Though Facebook arguably could and should have done more to bring the scope of the disclosure to users’ attention (something it now does much more rigorously), the what, how, when, and who of data use and disclosure was always there in the T&Cs for the reading. And that was exactly Facebook’s response when the story first broke – sorry, not our problem.

The moral of the story

Of course, Cambridge Analytica has turned out to be a huge problem for Facebook – the hit its public reputation has taken as a result of the scandal, evidenced by the #deletefacebook viral campaign, has demonstrated once again the awesome flagellating power of social media as the modern arbiter of public opinion.

And so, the moral of the story is this:

  • Even if you’re complying with privacy laws and have a published privacy policy which clearly sets out your data collection and use activities, it’s vital to factor in reputational risk when formulating policies around data collection, use, and disclosure (and just how far you need to go to bring that activity to the attention of the user).
  • Conduct a cost-benefit analysis, weighing the projected value of the activity (and of transparency) against the reputational risk.
  • Consult a PR expert and a legal expert, make an informed choice about relative risk and industry standards, and have a plan in place for if and when things do go wrong.

If you’re looking for some fun in the meantime, watch the videos of Zuckerberg teaching several of the not‑so‑technically minded senators how the internet works. Good for a giggle.

Amelia Edwards, Special Counsel

Since joining us in 2014, Amelia has developed broad commercial experience and interests working across the firm’s practice areas and on secondment. She holds a Bachelor of Arts (Anthropology)...