The Cambridge Analytica controversy is unlikely to die down soon as it touches a series of important democratic nerves. First, it concerns our ability to freely exercise our civic rights and duties. The fear of manipulation through data casts a cloud of suspicion over the sanctity of elections, which in turn shakes our belief in the legitimacy of the entire democratic system. Many democracies now fear that elections can be “stolen” through targeted manipulation.
Second, the extent of the data mining by social media platforms and the opacity with which this collected information is shared with third-party organisations for their own purposes heighten these anxieties. Social media platforms have a responsibility and moral duty to be more transparent about how they use the data instead of hiding behind legalities and technicalities that in effect exempt them from making any effort towards clarification.
The ongoing controversy involves a chain of finger-pointing that would be amusing had the matter been less serious: Facebook claims Aleksandr Kogan, a researcher at Cambridge University in the United Kingdom, collected data in violation of its terms of service; the researcher says he sold the data but had no idea what nefarious purposes it would be used for; and Cambridge Analytica (or at least Christopher Wylie, the whistleblower who worked there) says it simply used the data it bought.
One could pick one’s favourite scapegoat in this chain, but it is important to keep our attention focused on the heart of the problem. Regardless of the perniciousness of the ways in which personal data is being harvested by private companies, social media users are responsible for the type and amount of information they willingly give up online. Users are perfectly willing to post bits of information about themselves to a loosely defined group of “friends”, allow digital applications and services to track them all day, or publicly tweet about their likes and dislikes. While the users’ attitude may be “it’s just a trivial detail, who could possibly care?”, these ostensibly “trivial details” reveal much of their personalities and render them vulnerable to targeting and even subtle coercion.
Data mining is hardly a new practice. Market survey and advertising companies have been gathering consumer data for decades – mostly through consumer practices, credit card usage and such – with the aim of targeting customers with greater precision. However, what is new is the granularity and scale of personal – if not intimate – data that is available through social media and mobile apps. This data is further used to make subjective judgements about individuals, including psychological assessments, as happened in the case of Cambridge Analytica. The prevailing situation is equivalent to private players putting up cameras and microphones in a public square (one with billions of people) and doing whatever they want with the information they gather.
Old practice
Governments entered the data mining game in the late 1990s with the rise of international terrorism. Political parties followed suit and adopted the same technologies in the 2000s, again as an instrument of targeted communication. But the fact that political parties seek to gain advantage from these new mines of information should not spark outrage. Nor should the logical extension that they are trying to use this data not just to gain information about people’s political preferences but to attempt to mould those preferences. Historically, electoral politics has always been consubstantial with all manner of manipulation of the public. In his Political Dictionary, William Safire reminds us that by the 1950s, the term “spin” had already become synonymous with deception.
In the Indian context, parties have been using polling booth data for years – data secured through legal and at times extra-legal channels. This data is used to “read” the electoral map at its most granular level. It is also often used to reward or punish residents of certain localities depending on whom they voted for in the previous election. Parties and individual political actors have recently come to increasingly rely on political data extracted from the web and other electronic sources, such as phone records. They are using this data to “map” constituencies and target their audience for both public and covert forms of political communication. This has created a cottage industry of political consultants who lure their customers with the promise of enhancing their electoral strategies with science-backed solutions.
A major problem with such data work, however, is that it tends to make scientific claims while bypassing the rules that govern normal scientific activity: notably, the ethical rules governing scientific work and the necessity of disclosing one’s research objectives to the public. Then come the rules of verifiability, accountability and transparency, which demand that researchers disclose their data and methods so that others can confirm they are not mishandling their data or, more prosaically, simply making things up. Finally, there is the important matter of informed consent, which we can safely assume is being ignored in this context.
Given that political data work is today conducted in a deliberately opaque environment, literally anyone can make any claim without having to substantiate it. There is also a context of data fetishism in which the simple recital of numbers works as a substitute for proof. In a 2014 essay, Tim Harford notes that the belief that numbers speak for themselves is more an article of faith than a demonstrated truth.
Given the circular finger-pointing, it is clear that the problem will not be solved on its own in a largely unregulated environment. In India more than in most countries with large populations of social media users, there is a crying need for rules enhancing the protection of people’s data, for enacting and expanding their privacy rights and for regulating how private companies use the data they harvest, notably the conditions under which the data can be sold to third parties. In the absence of such a protective legal framework, targeting political actors for adopting intrusive data-driven electoral strategies is completely futile.
Gilles Verniers and Sudheendra Hangal teach politics and computer science, respectively, at Ashoka University, and are co-directors of the Trivedi Centre for Political Data.