Facebook has been selective in curbing hate speech, misinformation and inflammatory posts, particularly anti-Muslim content, in India, according to more than a dozen leaked internal memos and studies seen by the Wall Street Journal, the New York Times and the Associated Press.

Facebook sees India as one of the most “at-risk countries” in the world, meaning the company recognises it needs better algorithms and dedicated teams to respond to events in near real time. India is Facebook’s largest market, with at least 34 crore Facebook accounts and 40 crore WhatsApp users.

The platform’s woes have been exacerbated by its own “recommended” feature and a dearth of reliable content moderation systems, the papers shared by whistleblower Frances Haugen reveal. Employees’ concerns over the mishandling of such issues, and over the viral “malcontent” on the platform, appear to have been largely swept under the rug.

Track record

Last year, the Wall Street Journal reported allegations that Facebook favoured Prime Minister Narendra Modi’s Bharatiya Janata Party. The whistleblower’s exposé reinforces these claims, providing further evidence that Facebook’s former public policy director Ankhi Das, who had personally shared Islamophobic content, asked employees not to apply hate speech rules to posts by certain BJP politicians.

The author of a December 2020 internal document notes that “Facebook routinely makes exceptions for powerful actors when enforcing content policy”. In the same memo, a former Facebook chief security officer says that, outside the United States, “local policy heads are generally pulled from the ruling political party and are rarely drawn from disadvantaged ethnic groups, religious creeds, or castes”, which “naturally bends decision-making towards the powerful”.

Anti-Muslim propaganda

Much of the rhetoric on Facebook’s platforms teeters on India’s religious fault line, and the company appears to be quietly letting it slide.

A case study from March this year shows Facebook was debating whether it could control the “fear-mongering, anti-Muslim narratives” pushed by the Rashtriya Swayamsevak Sangh, a far-right Hindu nationalist group to which Modi belonged during his youth.

In a document titled “Lotus Mahal” – the lotus is the BJP’s party symbol – Facebook noted that members with links to the BJP had created multiple Facebook accounts to amplify anti-Muslim content, ranging from “calls to oust Muslim populations from India” to “love jihad”, an unproven conspiracy theory in which Hindu hardliners accuse Muslim men of using interfaith marriage to coerce Hindu women into changing their religion.

There were also Hindu nationalist groups with ties to the ruling party that continued their activity despite posting inflammatory anti-Muslim content, including “dehumanising posts comparing Muslims to ‘pigs’ and ‘dogs’ and misinformation claiming the Quran calls for men to rape their female family members”, the documents show.

In more extreme cases, such hate speech and fake news have led to physical harm in the real world. In February last year, a politician from Modi’s party posted a video on Facebook calling on his supporters to remove mostly Muslim protesters from a road in New Delhi, sparking riots that killed 53 people. A New Delhi government committee found Facebook complicit.

After the pandemic hit, “coronajihad”, a term blaming Muslims for intentionally spreading the Covid-19 virus, began circulating on social media. It took Facebook days to remove the hashtag, and by then doctored video clips and posts purportedly showing Muslims spitting on authorities and hospital staff had already made the rounds. The conspiracy theory cost Muslims dearly, leading to violence, business boycotts and, for some, jail time.

Despite all the internal deliberations, Facebook did not kick out these hardline Hindu groups, citing “political sensitivities”. Facebook spokesperson Andy Stone told the Wall Street Journal on October 23 that the company bans groups or individuals only “after following a careful, rigorous and multidisciplinary process”, and that some of the leaked reports were working documents still under investigation.

Content moderation

But Facebook’s own failing systems are likely getting in the way of these so-called investigations, too.

Back in February 2019, just before India’s most recent general elections, a Facebook employee created a test account to understand what a new user in the country would see in their news feed if the only thing they did was follow pages and groups recommended by the platform itself.

During this time, a militant attack in Kashmir killed more than 40 Indian soldiers. In the aftermath, Facebook groups were flooded with hate speech and unverified rumours, and viral content ran rampant, the documents show. The test user’s recommended groups were inundated with fake news, anti-Pakistan rhetoric and Islamophobic content.

Many of the rumours and calls to violence against Muslims were “never flagged or actioned” because Facebook lacked “classifiers” and “moderators” in Hindi and Bengali. In a statement to the Associated Press, Facebook said it has “invested significantly in technology to find hate speech in various languages, including Hindi and Bengali”, which, it said, has “reduced the amount of hate speech that people see by half” in 2021.

“Following this test user’s news feed,” the researcher wrote, “I have seen more images of dead people in the past three weeks than I have seen in my entire life total.”

This article first appeared on Quartz.