Imagine if a “Boycott all Black people” hashtag was trending on Twitter in America. Or a “Boycott all Jews” trend in Europe. Do you believe the social media company would remain silent and continue amplifying posts from users that are blatantly bigoted and could easily be considered hate speech?
Yet over much of the weekend, users in India saw trending topics that were equally discriminatory. Among the most popular was “#मुस्लिमो_का_संपूर्ण_बहिष्कार”, meaning ‘total boycott of Muslims.’ Click through to the hashtag and you will see demands that Indians boycott Muslims economically and ban Muslim schools, posts describing all Muslims as terrorists, and references to other hashtags that add homophobia to the anti-Muslim bigotry – as well as more than a few pushing back against the sentiment.
The Wire reported that many of the accounts posting these tweets are followed by ministers in the Bharatiya Janata Party government. It also pointed out that the tweets and the trending hashtag could be seen as illegal under Indian law.
While the question of its legality should not be ignored, the implicit BJP support for such a hashtag is not surprising. After all, this is a party whose president Amit Shah – also the Union Home Minister – continues to make promises at political rallies that are both unconstitutional and blatantly bigoted.
What is a little more murky here is the role played by the social media company. A Twitter spokesperson told NewsCentral24x7 that it had “prevented the hashtag from trending”, despite multiple users continuing to say they could see it on the list of trending topics. The tweets themselves remained on the social media site, even though one could argue that they fall afoul of Twitter’s hateful conduct policy.
Despite those rules, which every big tech company has in some form, it has become more and more apparent over the last few years that social networks were built in a way that encourages hate speech. From Facebook to Twitter to YouTube and even Instagram, it has become clear that this sort of material does not just exist on these platforms, as it did on the world wide web before the social media behemoths took over. Instead, hate speech is now thriving.
Why is that? Because Facebook and Twitter aren’t just neutral conduits for information. They may seem like bulletin boards, where anyone can put up a bit of information. But they are actually carefully ordered hierarchical systems that promote certain kinds of content, namely the sort that is likely to get the most engagement.
Over the past year, YouTube has been the clearest example of this. As internet scholar Zeynep Tufekci has argued, the recommended videos that show up alongside – and usually play automatically after – the video you came to see regularly push you towards more extreme content.
“Its algorithm seems to have concluded that people are drawn to content that is more extreme than what they started with – or to incendiary content in general,” she writes. “It promotes, recommends, and disseminates videos in a manner that appears to constantly up the stakes. Given its billion or so users, YouTube may be one of the most powerful radicalizing instruments of the 21st century.”
Indeed, it is these algorithms and hierarchies that make social media networks different from bulletin boards. A few scattered users – or even hundreds of thousands from an organised team – may have been tweeting incendiary material about Muslims, but if it were not for Twitter’s trending topics, or a user interface that will sometimes show you popular tweets from accounts you don’t follow, most people would not even be aware of them.
The trending topics list displays this problem well. By putting hashtags in plain sight for all users to see, Trending necessarily amplifies a subject no matter what it is or how offensive it may be.
And users have found it easy to game the system. On any given day, it is likely that most hashtags are the result of campaigns explicitly aimed at getting a phrase onto the trending topic bar, whether that is something offensive or promotional content for a movie.
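To see why a velocity-based ranking is so easy to game, consider a toy sketch of how such a system might work. This is a hypothetical illustration, not Twitter’s actual algorithm: it scores each hashtag by how sharply its usage has spiked relative to the previous hour, with no judgment about accuracy or quality.

```python
from collections import Counter

def trending(tweets_last_hour, tweets_prior_hour, top_n=3):
    """Toy 'trending' ranker: score each hashtag by its spike in usage,
    not by absolute volume or any judgment of quality or accuracy."""
    recent = Counter(tag for t in tweets_last_hour for tag in t["hashtags"])
    prior = Counter(tag for t in tweets_prior_hour for tag in t["hashtags"])
    # Velocity: how much a tag's usage grew compared with the previous hour.
    scores = {tag: count / (1 + prior[tag]) for tag, count in recent.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# A coordinated burst of 500 posts from an organised group outranks a
# hashtag that 1,000 people have been using steadily for hours.
organic = [{"hashtags": ["#cricket"]}] * 1000
burst = [{"hashtags": ["#campaign"]}] * 500
print(trending(organic + burst, organic))
```

Because the sketch rewards sudden growth rather than sustained, organic conversation, a small coordinated campaign can push a phrase onto the list far more easily than a genuinely popular topic can stay there – which is precisely the dynamic described above.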
“Despite being a highly arbitrary and mostly ‘worthless metric’, trending topics on Twitter are often interpreted as a vague signal of the importance of a given subject,” wrote Charlie Warzel in the New York Times earlier this year.
NY Magazine’s Brian Feldman made a similar point in 2018: “The first problem with ‘trending’ is that it selects and highlights content with no eye toward accuracy, or quality. Automated trending systems are not equipped to make judgments; they can determine if things are being shared, but they cannot determine whether that content should be shared further.”
And yet they are deciding whether something should be shared further, simply by amplifying it. Remember, Twitter is not a public utility – it is not mandated to carry all information people post and reserves the right to take down or refuse to carry certain kinds of material. Indeed, the service has, on paper, a strong set of rules against hateful conduct and has even intervened to take down posts and hashtags in the past, albeit more commonly in the United States where it is under more scrutiny.
It seems unlikely however that a network the size of Twitter, with a presence in scores of countries, would be able to manually or even algorithmically pick up hate speech globally. A decision back in 2009 to delete hashtags about “darkies” – considered a slur in the US but acceptable in South Africa where the tag had originated – is a reflection of this.
The size of this policing problem and the company’s unwillingness to acknowledge that it is privileging some topics through a system that is easily gamed prompted The Verge’s Casey Newton to suggest, earlier this year, that “it’s time to end ‘trending’ on Twitter,” saying “at best, it’s worthless — and at worst, it’s actively harmful.”
This is indeed what Twitter, and society in general, should be debating. Discourse over such matters tends to be binary: either ban or break up the social media companies, or allow them to let everyone post anything, even if they are pushing hateful content.
There is a more nuanced discussion to be had here about how the social networks are designed and what they choose to amplify. A common maxim held that Facebook did not create racists, it only connected them. But research into algorithmic extremism on YouTube has shown that connecting them can have a radicalising effect too.
In-depth discussion and civil society action have pushed other industries and private companies to change their behaviour in the past, in ways that made society better off. We need to apply the same lens to the social media companies that, for better or worse, have tremendous control over the flow of information and, as a result, over our lives.