Disinformation, extreme content, hate speech, content that violates local law, and the manipulation of content (and, through it, of voters) are global concerns that are reflected in the Indian context. Policymakers, companies, and other stakeholders are seeking solutions to these challenges.
Efforts by policymakers appear to focus on increasing the regulation of content, making the identification and traceability of users easier, and holding companies responsible for the content on their platforms. In late 2018, the Union Ministry of Electronics and Information Technology proposed amendments to India's intermediary rules under the Information Technology Act that included traceability and proactive filtering requirements. The Election Commission of India has established media certification and monitoring committees, required all political advertising to be pre-certified, and required companies to appoint grievance officers and to handle violations of guidelines expeditiously. Last year, the Press Information Bureau of India proposed, and later retracted, guidelines that would suspend a journalist's accreditation for creating or propagating fake news.
Company efforts – including by Facebook, Twitter, WhatsApp, and Google – have focused on bringing transparency to aspects of political content: verifying political advertisers, disclosing expenditure on political advertisements, monitoring content on their platforms more stringently, responding to government requests, including from the Election Commission, raising awareness around disinformation and fake news, building capacity amongst politicians, and verifying content. Such steps have been aligned with local law and policy.
Though many of these steps can address some of the concerns around content and its impact on elections, they can also put individual freedoms at risk if framed and implemented in a heavy-handed, blanket manner.
This article focuses on online content and its ecosystem, and on their intersection with elections. During the 2014 general elections in India, social media was seen as a game changer; in 2019, the same platforms have been seen as potential tools for voter manipulation and interference.
The ecosystem around the creation, dissemination, promotion, and consumption of content, including election-related content, is complex. To begin to understand the effectiveness of the measures taken, and to identify what needs the most attention, it is useful to unpack this ecosystem.
Actors and their roles
There are a number of different actors involved in facilitating the dissemination of content: traditional media outlets, broadcasting companies, social media platforms, internet companies, content sharing platforms, messaging platforms, social opinion aggregation and discussion platforms, and search engines.
Importantly, the purposes and roles of these companies are changing, and they are being used in ways not originally intended. Platforms such as WhatsApp or Facebook, initially meant to facilitate one-to-one and peer-to-peer communication, are now being used as broadcasting platforms.
These platforms and companies have recognised this shift. For example, after violence in India resulted from the viral spread of rumours on its platform, WhatsApp limited message forwarding to five chats and stated: "We believe that these changes – which we'll continue to evaluate – will help keep WhatsApp the way it was designed to be: a private messaging app."
Creators and content type
Content, including political content, can be created by a range of actors: individuals, journalists, the government, and political parties. The type of content also varies – from fact-based media coverage, to opinions, interviews and personal interactions, to propaganda and political manifestos.
The content can be in the form of posts, tweets, videos, messages, articles, op-eds, and so on. While electoral laws may make some information illegal for political parties or candidates to disseminate, the same content generated by private individuals and shared privately might not attract similar restrictions.
This makes the regulation of content, and, during elections, the job of the Election Commission, more difficult, as requirements imposed by the poll body may not extend to all types of creators or content.
As the roles of different actors are merging, so are the types of content: verified fact, opinion, legitimate political advertising, manifestos, misinformation, and propaganda are blending into one indistinguishable environment of disseminated information.
Content dissemination and consumption
The way in which content is disseminated on social media is characterised by its ease, speed, and personalised nature. Platforms use personalisation algorithms to push information to users based on a set of parameters and interactions, such as likes, past content, friends, tweets, and retweets, resulting in the display of content that matches user preferences, a phenomenon commonly referred to as the "filter bubble".
Thus, as pointed out by Tracking Exposed – an initiative that spotlights how social media companies track and profile their users – "The design of personalisation algorithms is not just a technical matter, but a political one". Personalisation algorithms are not just technical interventions made to optimise platform experience and targeted advertising; they can also embed political, ideological, and other societal biases. Social media has also been noted to fuel reactive engagement, in which individuals skim content, click, and share within networked communities and social circles.
Further, the information overload that individuals experience complicates their ability to ascertain the veracity of content, enabling misinformation and bias. For elections, this means that politicians and parties can narrowly target content at potential voters on social media, which is then shared across friends and networks, creating an information bubble.
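To make this dynamic concrete, consider a minimal, hypothetical sketch of preference-based ranking (a toy illustration in Python, not any platform's actual algorithm): because content that matches a user's interaction history is scored higher, each round of engagement narrows the feed further toward what the user already agrees with.

```
# A toy illustration of how preference-based ranking can reinforce a
# "filter bubble": posts matching a user's interaction history are
# scored higher, so the feed converges on familiar viewpoints.

from collections import Counter

def rank_feed(candidate_posts, interaction_history):
    """Order candidate posts by overlap with topics the user engaged with."""
    # Weight each topic by how often it appears in past likes/shares.
    topic_weights = Counter(
        topic for post in interaction_history for topic in post["topics"]
    )
    # Score each candidate by the summed weight of its topics.
    def score(post):
        return sum(topic_weights[t] for t in post["topics"])
    return sorted(candidate_posts, key=score, reverse=True)

# Hypothetical data: a user who has engaged twice with "party_a" content.
history = [
    {"topics": ["party_a", "economy"]},
    {"topics": ["party_a"]},
]
candidates = [
    {"id": 1, "topics": ["party_b", "health"]},
    {"id": 2, "topics": ["party_a", "economy"]},
]
# Post 2 outranks post 1: the feed keeps serving "party_a" content.
print([p["id"] for p in rank_feed(candidates, history)])  # [2, 1]
```

Each interaction increases the weight of already-favoured topics, which is how individually reasonable ranking choices can compound into the echo chambers described above.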
A number of steps taken by Facebook and Twitter have accordingly focused on verifying political advertisements and bringing transparency to them. Experience from the US has shown that targeted advertising shapes news consumption and allows actors seeking to spread propaganda to exert influence.
Conclusion
Though many of the steps taken by companies and policymakers seek to address some of the pain points associated with content, they do not address all the challenges surrounding the generation, dissemination, and consumption of content on social media, or the funding associated with it.
The present framework creates challenges due to function creep on platforms, the changing roles and uses of platforms, the infusion of advertising money, and personalised dissemination and consumption that allow echo chambers to form and content to go viral.
Without a contextually accurate understanding of content creation, dissemination, and consumption, and without more fundamental changes to these processes, policymakers and companies will always be looking for the next band-aid to fix the challenges posed by technology to elections and democracies around the world. The changes needed should reflect not only measures that policymakers and companies can adopt, but also technological and societal realities, including biases.
Creative solutions need to be explored to ensure that online spaces are not captured by groups whose specific interests conflict with societal harmony and the national interest. While the misuse of these platforms for electoral gains could help political parties and their supporters in the short term, it is important to be aware that it can also erode public trust and create hyper-partisanship, or exacerbate divisions, in a country that has rightfully prided itself on pluralism.
Elonnai Hickok is Chief Operating Officer, The Centre for Internet and Society, India.
This is the sixth part of a series on tackling online extreme speech. Read the complete series here.