Just 46 days after revelations that Cambridge Analytica harvested millions of Facebook profiles to predict and influence voters in the American presidential election and Brexit referendum, the data analytics company has announced it will close. So too will its parent company, SCL Elections.
Some may see this as a victory in the fight against covert or unacceptable means of data collection. But there is little to celebrate.
The closure follows a massive backlash against the company’s data collection activities. Announcing that it would commence insolvency proceedings, Cambridge Analytica said, “Despite Cambridge Analytica’s unwavering confidence that its employees have acted ethically and lawfully…the siege of media coverage has driven away virtually all of the company’s customers and suppliers.”
If anything, the furore surrounding Cambridge Analytica has only served to strengthen the distinction between covert data collection and data collection that is seen as legitimate and acceptable. The routine gathering and monetisation of vast amounts of personal data – undertaken daily by various actors, and by digital platforms especially – has been normalised.
The legitimacy of such data collection hinges on the idea that users have explicitly allowed it by accepting the terms and conditions of service. But how many users understand what they sign up for when they tick the terms and conditions box? These agreements are characterised by their extreme length and complexity (Amazon Kindle’s terms run to 73,198 words and take about nine hours to read).
Then there is the fact that some entities legally gather data even on non-users – people who have not signed the terms of service. As Mark Zuckerberg recently admitted to the United States Congress: Facebook routinely gathers data on non-members and, remarkably, the only way for non-members to remove the data gathered on them is to join Facebook.
There is also a whole range of other questionable means of collecting our data that are not seen as problematic. For instance, we often have to opt out of having our data collected, rather than being asked to opt in. And our data continues to be collected even after we log out of services.
So what are the conditions that render “legitimate data collection” possible, and what are its broader societal consequences?
We live in a world marked by the rise of data as a commodity, capitalised on by platforms that generate revenue from it. They generate a lot of revenue. Facebook made $40 billion in revenue in 2017. Google, which likewise monetises user data by selling targeted advertising, made $109 billion in 2017.
The outcry against Cambridge Analytica has not attempted to sanction, nor even to question, the existence of digital platforms and other actors which depend on the ever more extensive acquisition and monetisation of personal data. If anything, the Cambridge Analytica story has unintentionally contributed to the further normalisation of surveillance and the lack of privacy that comes with being an internet user nowadays.
Even the web pages of the sites that broke the story, The Observer and The New York Times, allow dozens of third-party sites to obtain data from the browser of the user accessing the articles. It was 75 and 61 sites, respectively, the last time I checked using Firefox’s Lightbeam extension.
Platform capital is the problem
Many commentators have pointed to this new era of “surveillance capitalism” as the problem. But these arguments imply that capitalism without surveillance is not only possible but actually existed before the advent of new technology.
Yet surveillance has been fundamental to the functioning of capitalism from the start. Producers have always needed to gather information about the nature of their markets, their suppliers of inputs, and the economy in general. Surveillance has also been central to the wage-labour relationship: employees are closely supervised and monitored to ensure the time they work matches the time for which they are paid.
Surveillance plays a much bigger role today. But the real issue now is the rise of platform capital – its growing importance in some sectors, outright dominance in others, particularly in social media and internet search, and the fact that this requires constant collection of personal data.
There are those who do see overt data collection as a problem (most notably civil society organisations such as Privacy International, the Electronic Frontier Foundation and Tactical Technology Collective) and are campaigning for measures to protect people against these constant invasions of privacy. There are also attempts by policymakers to introduce more stringent privacy rules, such as the European Union’s General Data Protection Regulation (GDPR).
But this remains a herculean task. Those that would like to stop or curtail the way personal data is collected are up against multi-billion dollar businesses whose profits directly depend on it. Hence, when Zuckerberg was asked about extending the GDPR’s privacy rules outside of Europe, he gave the vague response that Facebook is “still nailing down details on this, but it should directionally be, in spirit, the whole thing”.
The growing share of platform capital in various economies means that erecting major obstacles to its operation would hurt not only the platforms’ income but also the economies concerned. Recent evidence suggests that the EU data privacy regime is already having a notable negative economic impact. So we run the risk that digital platforms are becoming too big to fail in the eyes of regulators.
If citizens wish to regain (at least some) control over the data that concerns their private lives, they need to challenge platform capital as a whole – and not just individual actors such as Cambridge Analytica. The struggle to do so, however, will only get more difficult, as the dominance of platform capital keeps growing.
Ivan Manokha is departmental lecturer in International Political Economy, University of Oxford.
This article first appeared on The Conversation.