Prejudice, hate speech, and extremism have emerged as key concerns for policymakers in North America and Western Europe. White extremists have been active users of digital communications technology for three decades. However, in the last decade, social media has provided significant opportunities for right-wing extremism, which raises a crucial problem for North American and Western European societies.

Hostility to Islam and Muslims unites the far right today and plays a significant role in building transnational audiences. Turning to India and its diaspora, it is worth noting that “Hindu nationalist and alt-right ideology both pivot on Islamophobic constructions of the Muslim” as an enemy. Similar themes about multiculturalism, Islam, and anti-liberal discourse are also salient among Indian diaspora supporters of right-wing populists.

Consequently, looking to North America and Western Europe offers useful lessons for considering responses to prejudice and harmful extreme speech in India and across the globe.

Social media platforms that rely on user-generated content have enabled right-wing extremists to assemble large audiences. Moreover, they have been adept at cloaking their prejudicial, hateful, and violent ideas as humour and irony.

For example, despite protest from many employees, YouTube executives refrained from taking down right-wing extremist conspiracy theories, disinformation, and rhetoric because the company was focused on increasing its engagement metrics. This was a boon for right-wing extremists in recent years, who benefitted from and gamed algorithms that amplified their content and allowed them to monetise prejudice and hate.

Indeed, even after the Christchurch terrorist attack, far-right political activist Paul Golding of Britain First was able to purchase advertisements on Facebook’s platform. Facebook had previously refused to apply its hate speech rules against Britain First and another far-right activist, Tommy Robinson, arguing that they were expressing legitimate political speech rather than hate or extremism.

As these views proliferated online, key influencers in the US, UK, and Western Europe capitalised on this engagement to expand their social movements. Simultaneously, representatives of radical right political parties, such as the UK’s Nigel Farage, a Member of the European Parliament, and the US’ Senator Ted Cruz, chastised these companies for “censoring” legitimate conservative voices.

Action taken by social media companies

Nevertheless, growing concern from civil society about hate speech and right-wing extremism has led to some concrete action taken by social media companies. In December 2017, Twitter removed numerous alt-right accounts from its platform, prompting a migration to Gab.ai, an “alt-tech” platform rife with extremist speech.

The terrorist who attacked a synagogue in Pennsylvania, US, in October, was an active user of the site. Facebook took down Britain First’s page in March 2018, which had amassed over 1 million likes, after its leaders Jayda Fransen and Paul Golding were sentenced to prison in the UK for racially aggravated harassment.

YouTube, which has focused on Tommy Robinson’s page, has also begun to prevent recommendations of extreme content and far-right conspiracy theories.

However, these content takedowns are more the exception than the rule when it comes to right-wing extremism. Jihadists, by contrast, have rightly faced significantly more sanctions and account suspensions, leading to a significant decrease in their ability to use mainstream platforms.

Do counter-narratives work?

For right-wing extremism, social media platforms have leaned more on counter-narratives that seek to challenge extremist narratives rather than take them down. The success of such measures is unclear, despite much enthusiasm for the approach amongst government and social media platforms.

Facebook recently decided to make this a platform-wide policy, introducing a ban on “white nationalist” content and redirecting those who search for terms associated with this ideology to an anti-hate group, Life After Hate.

Such developments should be met with open arms – challenging this ideology is absolutely necessary, and it may help to dissuade a small number of those who hold such views. However, most of the evidence suggests that changing minds is an unlikely outcome of such programmes.

Content takedown is also imperfect; research demonstrates that these extremists are likely to migrate to other platforms or to encrypted messaging applications. Unlike major platforms such as Facebook, Twitter, and YouTube, these alternatives are harder to monitor and have little to no content moderation.

Such action, when taken against Islamic State supporters, reinforced their sense of connection to the group. It also opens platforms to attacks from politicians who claim that this represents censorship of influential conservative voices.

Shifting the debate

Recently, I argued that these dynamics make the cultures of prejudice, hate, and extremism online ungovernable. Handling these dilemmas requires making difficult choices and re-evaluating how we debate the limits of free speech in democratic societies.

It is imperative that we shift the debate. If the terms of the argument are grounded in the potential harms that such speech can cause, content takedown can be pursued responsibly. While pushing these individuals off mainstream platforms to more remote corners of the internet may make it harder to monitor them and sharpen their sense of being repressed by the so-called “establishment”, the mainstream attention they crave and monetise can be minimised.

The harms that prejudice, hate, and extremism online cause disproportionately affect the Muslim community, as persistent attacks on mosques and increased reports of hate crime across the West demonstrate.

In India, these effects have also been felt, and raise similar challenges about the freedom of speech and the security of all citizens. The use of WhatsApp to spread rumours – often anti-Muslim in nature – has led to the deaths of innocent Muslims. Extreme speech online can have devastating material consequences, and by focusing on harm as a balance to the freedom of speech, our attempts at regulating digital extreme speech can be improved.

An ethical and moral position

The overwhelming response across the globe has been to rely on counter-narratives and to prioritise free speech above the harms that this speech may cause. This response implicitly holds that the right to express prejudice and hate speech is a higher priority than the protection of those who may become victims of it. This is a particularly American position, and it is one that has dominated the moderation strategies of social media platforms.

In North America and Western Europe, this position can be challenged by making the case that the entitlement to live free from fear must be balanced with free expression. This is an ethical and moral position – not necessarily a legal one – that shifts to a regulatory strategy that seeks to facilitate a civil, plural public sphere that respects the dignity of its participants.

Making this moral argument can help win the debate over free speech and challenge the weaponisation of the alleged “censorship” of the right. Both these challenges are being raised in India today.

Right-wing groups in India have made the claim that their voices are being “censored”. Just like Republicans in the US Senate, in February, a BJP MP raised questions about the alleged restrictions on right-wing groups’ freedom of speech before a parliamentary committee on Information Technology.

Right-wing parties across the world increasingly echo one another in claiming that their supporters are being censored. Of course, these politicians rarely recognise the harms that the speech of some of their supporters may cause – rather, they fixate on the entitlement to free expression without balancing it with an ethical commitment to dignity and freedom from fear.

Reframing the arguments underway globally on hate, prejudice, and extremism on social media with a balance between dignity and free expression can help us work towards a constructive dialogue in which we can focus on fighting the harms of right-wing extremist speech. Focusing on dignity, freedom from fear, and civility can enhance our ability to challenge those that would seek to use freedom of speech as a shield for bigotry and hate.

Bharat Ganesh is a researcher at the Oxford Internet Institute, University of Oxford.

This is the eighth part of a series on tackling online extreme speech. Read the complete series here.