As a platform for private messaging, WhatsApp enables the circulation of extreme speech within particular interest groups, and in the format of text, memes and videos that can be forwarded to a user’s contacts.
Following researchers Matti Pohjonen and Sahana Udupa, we define online extreme speech as digital acts that seek to polarise opinion, and which may be articulated along the political spectrum. This may take the form of provocative or offensive messaging targeted at individuals or communities that transgresses the conventional lines of civil political discourse.
Unlike public platforms like Twitter and Facebook, WhatsApp’s private, “living room” style format makes it harder to police, trace or report extreme speech, and also shifts the boundaries of civility given the insularity of interest groups.
The nature of closed groups, in which users may be linked to family, friends and work colleagues, as well as party political actors and strangers, also informs the environment in which people receive and interpret extreme speech, and shapes their decisions about what content to circulate (or not), to which group chats, and when.
Extreme speech is intended to be emotive: it provokes people to act on their feelings and lends itself to cultivating ethno-nationalist identities. In the run-up to the 2019 general elections in India, political parties have been mobilising WhatsApp to circulate extreme speech for political gain, in ways that often blur the truth and delight in abuse.
Extreme speech targets
Parties use extreme speech to target Opposition party members and propagate nationalist sentiment. While all political parties now organise and campaign on WhatsApp, the ruling Bharatiya Janata Party has the best-developed network of social media workers (as well as self-styled online volunteers) to create and circulate content on the platform.
Extreme speech thrives on political discord and social tensions, where political organisations can gain from further polarising opinion. In the wake of the Pulwama attack and India’s alleged retaliatory strike on terror camps in Balakot, Pakistan, in February, images spreading disinformation, as well as inflammatory memes and videos, invoked jingoistic nationalist themes and anti-Pakistan sentiment.
Rhetorical appeals to Indian nationhood and the labelling of certain groups as anti-national on WhatsApp worked well to support the BJP government’s political agenda, framing those who did not vote for Narendra Modi as traitors.
Impact on daily life
What impact does sharing extreme speech via WhatsApp have on everyday life? Our ongoing research in North India on the use of WhatsApp for political messaging shows that the circulation of digital content characterised as “extreme speech” informs everyday public opinion.
Take the example of a university student in rural North India, who pointed to how children in his village, who did not even have their own smartphones, levelled abusive language at Pakistan using phrases learned from an anti-Pakistan video that was circulated after the suicide bombing in Pulwama, South Kashmir, in February.
Another student directly credited extreme messages on social media with inspiring students to take to the streets in the name of nationalism after the Pulwama attack.
Certain extreme speech acts are repeated so often in digital spaces that the messaging they evoke risks becoming banal and acceptable. Images of Modi dropping bombs on Pakistan cultivated his image as a strongman leader, accompanied by slogans like “Modi hai toh mumkin hai”, or, “With Modi, it is possible”.
A forward on WhatsApp groups that presented a spurious survey that allegedly showed 77% of people to be in favour of publicly beating up anti-national journalists also gained credibility through its ubiquity.
Digital media plays a role in enabling extreme speech due to factors such as virality, the personalised nature of circulating messages, and a wider reach in a shorter time.
Additionally, with respect to WhatsApp, receiving messages from personal contacts such as family, friends and neighbours reinforces the message and sentiment of the content, regardless of whether it is true. In India, the use of WhatsApp to spread misinformation has led to brutal acts of violence and lynchings, characterised by the media as “WhatsApp murders”.
In 2018, the Government of India called for WhatsApp to take responsibility for the violence by handing over the location and identity of those sending provocative messaging to law enforcement agencies. Eager to avoid state regulation and protect the platform’s end-to-end encryption, WhatsApp tagged “forwarded” messages, limited the number of forwards to five per message, and more recently launched a fact-check service in India.
These steps alone, however, cannot mitigate the effects that digital extreme speech has on society. We need to remember that even as the digital mediates extreme speech, its production and consumption is informed by everyday societal and political contexts.
Strategies to tackle extreme speech on WhatsApp need to bring together the socio-political and the digital. Interventions by the state, civil society groups, and social media companies need a better grasp of how the digital interacts with everyday politics in order to design effective bottom-up strategies that engage with citizens at the grassroots.
WhatsApp has made a start by tying up with NASSCOM, the trade association of the Indian Information Technology and Business Process Outsourcing industry, to train citizens to identify misinformation. There is also a need for initiatives that focus not just on the content of messages, but also on using social media to promote counter-narratives to the divisive politics of extreme speech, promoting peace in everyday life.
We can derive hope from the fact that in the wake of Pulwama, WhatsApp chats were also used to spread messages of unity between Hindus and Muslims in India, and #SayNoToWar gained widespread support on Twitter amid recent calls for war with Pakistan.
Where surveillance of digital extreme speech, or attempts to curb its virality, have their limits, efforts to thwart it need to focus on acts of counter-speech in everyday digital as well as non-digital spaces. Counter-speech can be one of the many steps needed to address digital extreme speech, especially when the content is not strictly unlawful, but nevertheless harmful.
Lipika Kamra is assistant professor at OP Jindal Global University, Haryana, and Philippa Williams is Senior Lecturer at Queen Mary University of London. They wish to thank the researchers on this project.
Disclosure: This research is funded by WhatsApp. WhatsApp does not oversee the research, nor does it have editorial control over this, or other publications.