Online extreme speech has become a political flash-point in India, as political parties test out tactics and flood social media channels with campaign content to sway voters in the ongoing national elections. Recent trends of disinformation, rumour-mongering, hateful expression and covert campaigns of targeted appeal raise a troubling question: are these trends taking us to the dark side of the internet?
It is important to avoid an alarmist account of digital manipulation, especially since online electoral campaigning has also involved "needs-based assessments" of civic grievances among voters at the grassroots level. However, these efforts pale in comparison to the manipulative demands political parties place on digital media, using them as instruments of targeted appeal for electoral gains.
By definition, extreme speech can be subversive or dominating, depending on who uses it and in what context. Despite various contestations, and indeed precisely through such online disagreements, a significant part of political extreme speech in India today pivots around the rhetoric of nationalism. A critical view of what is unfolding on the ground reveals at least three major challenges in tackling extreme speech that normalises exclusion.
Challenges in tackling extreme speech
First, globally, content moderation of extreme speech has been difficult partly because of its elusive formats and coded language, and the difficulty of accounting for context. Internet memes illustrate how the ambiguity of humour poses challenges to regulation.
In India, machine learning and automated content moderation must be equipped to address the country’s vast linguistic diversity, as well as account for evolving vocabularies and verbal play. Online hate, for instance, can often be cloaked in seemingly innocuous expressions and jocular formations.
The second challenge concerns the nature of distribution of extreme speech. Social media has enabled political parties to circumvent possible barriers in mainstream media and raise a parallel structure of extremely intrusive campaign networks.
For instance, the Bharatiya Janata Party’s WhatsApp groups, reaching down to the electoral booth level, have created a distributed network of targeted appeal. The official top-down, pyramidal structure of WhatsApp content distribution is linked to a vast, dispersed, multi-nodal network of volunteers via various actors at intermediary levels, such as smartphone owners and local influencers.
The party’s triadic, multi-step structure of WhatsApp distribution strategically blurs the boundaries between political campaigning, family messaging and friendly banter. Controlling the spread of extreme speech becomes difficult because it circulates within these nebulous networks of trust. The Congress and other parties rely on the same ecosystem and similar logics.
Although WhatsApp has claimed that it has raised “speed breakers” to disinformation by limiting the number of forwards, in practice this measure has only increased the number of people engaged in the spread of messages.
According to company representatives who interacted with academics at a recent symposium in New Delhi hosted by the Kofi Annan Commission, pushing abusers onto multiple networks would still raise the cost of abuse, since it is computationally easier to detect sharing across a greater number of groups.
However, it is unclear whether the company’s existing detection capability matches the vast volunteer and “official” distribution structures of political parties, and their rapidly adapting strategies for exploiting possible grey areas in content and distribution. Also, how is the problem of trust addressed through quantitative barriers?
Third, the diversity of actors. Elsewhere, I have proposed that we can identify at least five prototypes of online extreme speech actors: the party worker online (among whom ideological affiliation precedes online uptake); the techie-turned-ideologue (a highly motivated, educated class of volunteers who vouch that they do not take a penny for their online work); bhakt business (in which idolatry, ideological affiliation and commercial interests enter a win-win relation); monetised Hindutva (ideological content produced solely for commercial purposes); and digital influencers and political intelligence consultants (politically agnostic commercial players). Tackling online extreme speech means devising policies that can address these different actors and the specificities of their operations.
Regulating commercial players
Two approaches are urgently needed.
First, the focus on commercial players. Automated lurkers, sorters, and messengers have become the new players of politics today. This is the new phase of capitalism whose most powerful actors, that is, multinational companies, do not hesitate to shake hands with political power for campaign agendas.
The controversial Cambridge Analytica case has been a watershed in exposing how social media and data analytics companies manipulate public life, but the trends are growing. Despite global scrutiny and well-intentioned introspections, multinational and home-grown companies are only innovating more on their lurking, data gathering and analytics technologies. Such innovations, at times, benefit from a lack of streamlined regulation for data collection.
Indian political parties across the spectrum see this evolving ecosystem of large and small players as an opportunity. While some commercial players get into direct business agreements, others have allowed their services to be used for political campaigning with no standardised code of conduct, and little to no oversight.
Recent co-regulatory efforts, in which platforms have agreed to a voluntary code of conduct, are promising, but it is important to ensure that such efforts continue beyond the election period. The fate of the Codes of Ethics and Broadcasting Standards in commercial news television is a sobering reminder of the severe limitations of self-regulation.
Moreover, even as Facebook, WhatsApp and Twitter have come under regulatory scrutiny, TikTok, ShareChat, Helo and other mid-range platforms have started providing new means to share political content and peddle partisan positions.
Machine learning has helped large companies such as Facebook to block fake accounts and address downstream issues to a certain degree, but online extreme speech cannot be tackled by content moderation alone.
Policy efforts should understand and target media practices, that is, what people do with media tools. For instance, labelling messages as “forwarded” is sometimes interpreted as an obligation to forward. The effect can be just the opposite of what was intended.
Keeping track of practices on the ground will also reveal changing tactics. Facebook’s “watch party” function is a popular option following restrictions on WhatsApp forwards and group size.
At the same time, sharing hateful expressions online is often not experienced as hate speech by users, but as “fun” or even “responsibility”. These experiential realities cannot be adequately addressed by calling vitriol “hate speech”. Indeed, such moralising positions can backfire, since users soon resort to defensive counters. A key strategy in this case is to mark users as offenders using expressions that are sensitive to language differences and to locally prevalent terms.
A daunting task is to address covert connivance and acquiescence between governments and social media companies. Global regulatory interventions should mount pressure on social media companies to implement the norms with equal rigour within all national and regional contexts. The framework of eight Ds – deletion, demotion, disclosure, dilution, delay, diversion, deterrence and digital literacy – is a good starting point.
Cooperation and collaboration
To achieve this, it is important to turn to the second approach – cooperation between IT companies and civil society. These efforts should actively involve local online groups already formed by political parties and voluntary organisations.
Organic interventions in terms of repurposing existing WhatsApp groups and connecting them with Right to Information activists and community forums would be one strategy for a robust people-centric approach.
A vast network of informal labour engaged in extreme speech, the half-hearted measures of social media companies, political expedience and ideological hegemony have together created the conditions for online extreme speech in India.
Divisive extreme speech riding on encrypted messaging services is an immediate problem, but the ecosystem is deeper than prevailing fads in online communication. Only a people’s movement together with institutional changes can help develop a long-lasting alternative, holding both corporate firms and governments accountable.
Sahana Udupa is professor of Media Anthropology at LMU Munich, Germany.