On October 6, the news website The Wire reported that Instagram, owned by Meta (formerly Facebook), had taken down a post by user @cringearchivist, claiming it violated the platform’s content guidelines.

The post, a video of a man performing aarti to an idol of Uttar Pradesh Chief Minister Adityanath, had been flagged for violating the platform’s standards on “sexual activity and nudity”.

The publication followed up with a report on October 10 claiming that Meta took down this post at the direction of Amit Malviya, the head of the Bharatiya Janata Party’s social media cell.

The Wire claimed that Malviya has special privileges through an Instagram programme called X-Check, which ensure that any post he reports is removed from the platform immediately, “no questions asked”.

Meta spokesperson Andy Stone denied the allegations on Twitter.

The original Instagram post by user @cringearchivist, which was taken down by Meta.

The Wire also wrote a follow-up story claiming that Stone sent an email to his colleagues asking them to put two journalists from the publication on a “watchlist”. Read about the spat between The Wire and Meta here.

In this very public battle, a few important questions about Meta’s content moderation practices remain unanswered.

  1. Which content moderation standards were violated by @cringearchivist’s post?
  2. Instagram’s communications with the user claimed that the post related to sexual activity. What was the rationale for tagging a video of a man praying to an idol as sexual content?
  3. Was this content moderation decision taken by an automated system, by humans, or by both?

At the time of writing this piece, the post has not been reinstated. When Scroll.in contacted Meta’s communications team, it responded by sharing a link to the company’s official statement, which said:

  1. X-Check/cross-check privileges do not allow users to have content removed from the platform with “no questions asked”.
  2. Questionable posts are surfaced for review by automated systems, not user reports.
  3. The Wire’s articles are based on fabricated documents and screenshots.
  4. There is no Meta “watchlist” for journalists.

The statement did not address the questions about content moderation. When Scroll.in sent Meta follow-up questions regarding content moderation, the company declined to provide “additional inputs”.

A history of Meta’s struggle with content moderation

While the post by @cringearchivist contained no nudity yet was taken down for violating guidelines on sexual content, in 2019 the platform had allowed Brazilian footballer Neymar to share naked photos of a woman who had accused him of rape.

At the time of the incident, Neymar had over 150 million followers on Instagram, making his one of the most influential accounts on the platform.

A report by The Wall Street Journal claimed that Neymar was protected by the X-Check programme.

Meta, which owns Facebook, Messenger, WhatsApp and Instagram, has a history of taking controversial content-moderation decisions on its platforms to avoid offending local governments and influential public figures, while it continues to struggle with cultural context and an over-reliance on Artificial Intelligence.

Examples include:

1. In September 2020, Meta took down a post by a Facebook user in Colombia that contained a cartoon resembling the official crest of the National Police of Colombia, depicting three figures in police uniform holding batons over their heads.

This decision was overturned by the company’s Oversight Board only last month.

The Board’s report noted that the platform’s use of “media matching service banks” amplifies the impact of incorrect decisions by individual human reviewers. These banks automatically match and remove images that human reviewers have previously judged to violate the company’s rules.

Simply put, one wrong decision can hurt many users.
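
To make the mechanism concrete, here is a minimal sketch of how such a bank behaves, assuming a simple perceptual hash and an arbitrary matching threshold. It is an illustration only; the hashing scheme, the threshold and every name in it are assumptions for the example, not details of Meta’s actual media matching service.

    # Toy "media matching bank": images ruled violating by a human reviewer are
    # fingerprinted, and later uploads with a near-identical fingerprint are
    # removed automatically, without fresh human review.

    def average_hash(pixels):
        """Hash an 8x8 grid of grayscale values (0-255) into a 64-bit integer."""
        flat = [value for row in pixels for value in row]
        mean = sum(flat) / len(flat)
        bits = 0
        for value in flat:
            bits = (bits << 1) | (1 if value >= mean else 0)
        return bits

    def hamming_distance(a, b):
        """Number of differing bits between two hashes."""
        return bin(a ^ b).count("1")

    class MediaMatchingBank:
        def __init__(self, threshold=5):
            self.banked_hashes = set()   # fingerprints of images reviewers marked as violating
            self.threshold = threshold   # how close a new image must be to count as a match

        def bank(self, pixels):
            """A reviewer's (possibly wrong) decision is stored permanently."""
            self.banked_hashes.add(average_hash(pixels))

        def should_remove(self, pixels):
            """Every future upload is checked against the bank automatically."""
            h = average_hash(pixels)
            return any(hamming_distance(h, banked) <= self.threshold
                       for banked in self.banked_hashes)

    # One incorrect banking decision now removes every near-copy of the image,
    # however many users post it.
    bank = MediaMatchingBank()
    wrongly_banked_image = [[(row * 8 + col) * 4 % 256 for col in range(8)] for row in range(8)]
    bank.bank(wrongly_banked_image)
    print(bank.should_remove(wrongly_banked_image))  # True -> automatic removal

The point of the sketch is the last two lines: once an image is banked, no further human judgement is involved in subsequent removals.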

Facebook’s Oversight Board, first proposed in 2018, acts as a kind of Supreme Court for Meta’s content moderation decisions. It has received over 1.6 million cases since its inception.

2. In India, Facebook did not take down a video of BJP minister Anurag Thakur leading chants of “goli maaron saalon ko” (shoot the traitors) at a rally in North West Delhi’s Rithala in January 2020. “Sala”, which literally means brother-in-law in Hindi, is also used as an expletive.

It is unclear why this video did not violate the platform’s guidelines against hate speech. It is still available on Thakur’s verified page.

The video resulted in the Election Commission imposing a three-day ban on campaigning by Thakur.

In October 2021, Facebook whistleblower Frances Haugen claimed that the platform was aware of anti-Muslim posts in India, but took little action because of “political considerations”.

3. During the protests against the Citizenship (Amendment) Act in India in 2019-’20, Facebook ignored several posts on the platform calling for protesters to be killed.

One such example was a series of videos uploaded by Rambhakt Gopal Sharma. On January 30, 2020, Sharma, with a pistol in his hand, walked up to students protesting outside Delhi’s Jamia Millia Islamia University and opened fire at the crowd, injuring one person.

The vitriolic videos he posted on his Facebook page on the day of the shooting circulated widely for hours. They were taken down only after outrage from the media and concerned citizens.

At the time, Facebook explained, “The gunman did not use Facebook Live during the shooting, and FB Live content shared prior to the shooting did not expressly indicate his intent to carry out this violent act.”

4. In March 2022, Instagram took down a video depicting the sexual assault of a tribal woman by a group of men. The video was uploaded by an Indian account meant to be a platform for Dalit perspectives, and the victim’s face was not visible in the post.

Human reviewers hired by the platform determined that the content violated Meta’s Adult Sexual Exploitation policy and the video was taken down.

The decision has been contested and Meta’s Oversight Board has invited public comments.

This is an example of how platforms like Meta struggle to balance newsworthiness and an understanding of socio-political context against the harm that could be caused by allowing depictions of sexual assault to circulate.

5. In January 2022, Facebook removed a post by an Urdu-language media organisation from India. The post was about the Taliban announcing that schools and colleges for women and girls would reopen in March.

The platform claimed the post violated their Dangerous Individuals and Organisations policy, which prohibits “praise” of entities deemed to “engage in serious offline harms”, including terrorist organisations.

The post was removed and “strikes” were imposed against the page administrator, limiting their access to certain Facebook features (such as going live on Facebook).

The administrator appealed the decision and the case was routed to HIPO (high-impact false positive override), a system the platform uses to identify cases where it has acted incorrectly. However, nothing happened, as Meta had only 50 Urdu-speaking reviewers allocated to HIPO at the time.

This particular instance shows Meta’s struggle with content moderation in languages other than English.

The Oversight Board found that the Community Standards and internal guidance for moderators are not clear on how the prohibition on praise and the allowance for reporting apply, or on the relationship between the two.

“This raises serious concerns, especially for journalists and human rights defenders,” they said. “In addition, sanctions for breaching the policy are unclear and severe.”

6. In May 2021, Facebook took down a video posted by a regional news outlet in Colombia that showed protesters marching behind a banner that said “SOS COLOMBIA”.

Protesters can be seen singing in the video, criticising the tax policies of then President Iván Duque Márquez. As part of their chant, they call the president “hijo de puta” (son of a bitch) and say “deja de hacerte el marica en la TV” (stop being the fag on TV).

The video was shared close to 20,000 times and was reported by fewer than five people. Facebook claimed it violated the platform’s Hate Speech Community Standard and removed it.

This decision was overturned by its Oversight Board in September 2021.

The guidelines against hate speech are meant to protect minorities, including the LGBTQ+ community.

7. In April 2021, a Facebook user in Myanmar wrote a post discussing ways to limit financing to the Myanmar military following the coup in the country in February that year.

The post had half a million views but was taken down by the platform even though it was not reported by any users.

Facebook translated a part of the user’s post to mean “Hong Kong people, because the fucking Chinese tortured them, changed their banking to UK and now [the Chinese], they cannot touch them.”

Four content reviewers at Facebook examined the post and concluded that it violated the platform’s guidelines against hate speech against Chinese people.

But the Oversight Board noted in August 2021 that the post referred to China, not Chinese people, and ordered it to be restored.

8. In July 2022, Facebook took down a caricature of Iran’s Supreme Leader, Ayatollah Ali Khamenei, which showed his beard forming a fist grasping a woman wearing a hijab. The woman is depicted blindfolded, with a chain around her ankles.

The text accompanying the caricature called for “death to anti-women Islamic government and death to its filthy leader Khamenei” while calling Iran the worst dictatorship in history, in part due to restrictions on what people can wear.

The user called on women in Iran not to collaborate in oppressing women.

While the user’s appeal was never reviewed due to a lack of content moderation staff, the post was restored by Meta in August 2022 after the Oversight Board decided to look into the matter.

Such examples raise questions about how platforms should treat rhetorical calls for violence against prominent personalities and posts around anti-hijab protests (which can be considered a human rights issue), while keeping in mind the varying levels of free expression in different countries, internet censorship and how specific governments respond to their critics.

9. In January 2021, a Russian Facebook user wrote about the protests in support of opposition leader Alexei Navalny held in Saint Petersburg and across Russia. An argument ensued amongst other users in the comments section.

A user (later referred to as Mr X) commented on the post that the protesters in Moscow were all school children, mentally “slow” and “shamelessly used”. Another user (later referred to as Mr Y) challenged this comment, saying they were elderly and had participated in the protest. Mr Y then called Mr X a “cowardly bot”.

Facebook took down Mr Y’s comment, saying it violated its Bullying and Harassment policy. However, Mr X’s comment calling the protesters “mentally slow” did not trigger any alarm bells.

The platform’s decision was challenged and overturned by the Oversight Board because “the Community Standards failed to consider the wider context and disproportionately restricted freedom of expression”.

10. In 2019, The New York Times reported on Facebook’s failure to act against fake news targeting Rohingya Muslims that was being shared by users in India. Apart from posts threatening to burn their houses, stories falsely accusing Rohingya Muslims of cannibalism also went viral on the platform.

Many of these posts were shared by Hindu nationalist supporters of the BJP at a time when its leaders were peddling anti-Rohingya rhetoric during an election campaign.

How effective is Meta’s Oversight Board?

While having an Oversight Board is seen as a radical move for a Big Tech giant and has been hailed as a “one of a kind” measure, the platform and the board still face several complex challenges.

The board has been criticised for turning in late verdicts and has only been able to address a minuscule number of content moderation challenges plaguing Meta.

In the first quarter of 2022, the board received 479,653 appeals from users and Meta. Only 20 of the user-submitted cases were shortlisted for review, and Meta reversed its original decision in 70% of those instances.

Two case decisions and one policy advisory opinion were published by the board in this period, which together contained 22 recommendations for Meta. Some of these have been implemented in full and some in part.

The board claims that it continues to lack data to verify progress on or implementation of the majority of recommendations.

Moderation process

A 2020 study by the Center for Business and Human Rights at New York University’s Stern School of Business paints a grim picture.

  1. Facebook, which has over 2 billion users, relies on only about 15,000 content moderators across the world.
  2. More than 3 million items are flagged every day by users and AI systems as potentially warranting removal.
  3. The prescribed “average handling time” for moderators to check a post is 30 to 60 seconds. That means that over an eight-hour shift, minus washroom and lunch breaks, an average moderator checks 600-800 pieces of content (see the rough calculation below).
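
As a rough sanity check of those figures, the arithmetic below assumes about six and a half hours of effective review time per shift; that allowance, like the script itself, is an assumption for illustration rather than a number from the report.

    # Back-of-the-envelope check of the moderator workload figures above.
    # The effective review time per shift is an assumption, not a figure from
    # the NYU Stern report.
    effective_seconds = 6.5 * 3600           # 8-hour shift minus roughly 1.5 hours of breaks
    for handling_time in (30, 40, 60):       # seconds spent per item
        items_per_shift = effective_seconds / handling_time
        print(f"{handling_time}s per item -> ~{items_per_shift:.0f} items per shift")
    # Prints roughly 780, 585 and 390 items per shift, so the 600-800 figure
    # corresponds to the faster end of the prescribed handling-time range.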

The South Asian advocacy group Equality Labs published a separate report in June 2019 that described hundreds of memes and posts targeting Indian caste, religious, and LGBT minorities.

They brought this content to Facebook’s attention over the course of a year but the platform failed to remove it.