On Tuesday, social media giant Meta announced that, starting with the US, it will end its third-party fact-checking programme. Under this initiative, 90 organisations across 130 countries partnered with Meta to flag misinformation on its platforms, Facebook, Instagram and Threads.

The technology company had started the programme in 2016. It will now switch to a crowdsourced fact-checking model like X’s Community Notes. This system allows users to suggest clarifications that are displayed next to posts they perceive to be misleading.

Even though the shift has been announced only for the US for now, social media policy experts told Scroll that the decision is likely to be rolled out in other countries too. Experts said that the move was aimed at improving relations with US President-elect Donald Trump before he takes office later this month. Trump has frequently made untruthful or misleading assertions.

The pandering to power raises questions for a country like India, where the government has been aggressive in taking down critical voices on social media.

How will the changes in Meta policy work?

Under the existing system, Meta reduces the visibility and reach of a post if it agrees with its partner fact-checkers that the information is misleading or false. In addition, a label giving users context about the misinformation is added to the post, with a link to a fact-check.

Meta’s new community notes model closely mimics the one used by X, which is owned by Elon Musk, a close associate of Trump. According to X’s stated policy, contributors are admitted at random from accounts that have signed up for the programme. However, a note is not made public based on the opinion of the majority of contributors, but only if accounts from “diverse perspectives” rate a proposed note as helpful.

Dhruv Garg, a technology policy researcher at Bengaluru-based think tank Indian Governance and Policy Project, explained how this works. “Diversity of perspective relies on the metric of whether enough people who have disagreed on something in the past are agreeing now,” he told Scroll. X claims that this model is a “fair and effective way to add information”.
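To illustrate the idea Garg describes, here is a minimal sketch in Python of such a “diverse perspectives” rule: a note is published only when raters who have disagreed on past notes now agree that it is helpful. The function names, thresholds and data layout are invented for this example; X’s actual open-sourced system is considerably more sophisticated, relying on matrix factorisation over rating histories rather than this simple pairwise check.

```python
from itertools import combinations

def agreement(a: dict, b: dict) -> float:
    """Fraction of past notes that two accounts rated the same way."""
    shared = set(a) & set(b)
    if not shared:
        return 0.5  # no shared history: treat the pair as neutral
    return sum(a[n] == b[n] for n in shared) / len(shared)

def should_publish(helpful_raters: list, history: dict,
                   disagree_below: float = 0.4) -> bool:
    """Publish a note only if at least one pair of accounts that rated
    it helpful has a record of disagreeing on past notes."""
    for x, y in combinations(helpful_raters, 2):
        if agreement(history[x], history[y]) < disagree_below:
            return True  # historically opposed raters agree here
    return False

# Past ratings: account -> {note_id: rated_helpful?}
history = {
    "a": {"n1": True,  "n2": True,  "n3": False},
    "b": {"n1": False, "n2": False, "n3": True},   # opposes "a"
    "c": {"n1": True,  "n2": True,  "n3": False},  # mirrors "a"
}

print(should_publish(["a", "c"], history))  # False: like-minded raters only
print(should_publish(["a", "b"], history))  # True: opposed raters now agree
```

The point the sketch captures is that raw vote counts are ignored: a note backed only by like-minded accounts stays unpublished, however numerous they are.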

On Tuesday, Meta claimed that this approach had worked for X and was “less prone to bias”.

Photo: Josh Edelson/AFP
Meta CEO Mark Zuckerberg has justified the policy changes saying that “too much harmless content [was getting] censored” on his platforms.

However, experts are wary about whether this model can work effectively. Jhalak Kakkar, executive director at the Centre for Communication Governance at Delhi’s National Law University, said that the community notes system suffers from the polarised nature of social media discourse in general. “People from differing ideologies rarely build consensus on social media,” she said. “So, the notes often do not become public and, even when they do become public, it is not in a timely manner and hence misinformation goes unchecked.”

Moreover, there remain question marks over the credibility of the content of community notes.

Neil Brown, the president of the American media school The Poynter Institute, said that there are no consequences for writing an inaccurate community note. “With fact checkers, you know what their methodology is and who funds them,” Brown told Scroll. The Poynter Institute runs the International Fact-Checking Network, a global coalition of more than 170 fact-checking organisations.

Bending to Trump

Meta’s decision to tweak its content moderation policy came close on the heels of a personnel change that experts said was a precursor of what was to follow. On January 2, the company appointed Joel Kaplan, a Republican lobbyist in the US, as its chief of global affairs.

The appointment was evidence of Meta’s pivot towards conservative politics in the wake of Donald Trump’s presidential win in November, said Raman Jit Singh Chima, the policy director for the Asia Pacific region at Access Now, a non-profit that focuses on digital civil rights.

Chima also pointed out that in addition to phasing out the fact-checking model that was long criticised by Trump and his supporters, on Tuesday Meta also tweaked its policy on what amounts to “hateful conduct” on its platforms.

In the revised policy, Meta has deleted warnings against racism, homophobia and Islamophobia that were part of its previous guidelines issued in February 2024. Among the guidelines deleted were those that prohibited depictions of “women as objects” and “black people as farm equipment”.

Meta has justified these changes along with the shift from third-party fact-checking by contending that under the current system, “too much harmless content [is getting] censored”.

On this too, social media policy watchers have fact-checked the technology company. Brown of The Poynter Institute pointed out that fact-checkers never censored content on Meta platforms. “Fact checkers offered independent review of the posts using Meta’s tools and rules and we showed our sources,” he said. “Then it was up to Meta to decide what to do.”

Baybars Orsek, the vice president of fact-checking at British anti-misinformation startup Logically Facts, agreed that the fact-checking mechanism “making too many mistakes” was not the reason for the policy shift, as Zuckerberg claimed on Tuesday. Orsek pointed out that in a transparency report for Europe published by Meta as recently as October, the company had said that only about 3% of its fact-checks were found to be erroneous on appeal.

Political impact in India

While all the experts Scroll spoke to agreed that Meta’s policy shift panders to the incoming Donald Trump administration, they were not immediately sure how it might impact social media in India, if and when the changes are rolled out in the country. The reason for this, they said, was the lack of transparency from Meta on how it was planning to implement the changes even in the US.

“The fact that I should have an answer on the possible impact in India, and I do not, is itself alarming,” said Chima of Access Now. Chima told Scroll that this was even more concerning because of Meta’s track record of favouring the ruling Bharatiya Janata Party.

“Meta has not been transparent in its relationship with the ruling party,” Chima said. “There have always been concerns that Meta prioritises its political relationships in India over safety on its platforms or even applying its own rules consistently.”

Technology policy researcher Prateek Waghre expressed concern about whether Meta’s community notes model, which has not been tested at this scale, would be able to check the proliferation of hate speech or misinformation.

Studies have shown that this concern is not unfounded. An investigation by the European Fact-Checking Standards Network last year showed that nearly 69% of tweets that fact-checkers found to be false or misleading had not been moderated by X’s community notes programme.

Waghre said that if Meta rolls out the community notes model in India, it could result in voices favouring the government hijacking the system. “The situation in India is already lopsided with the government pushing for its own fact-checking unit and using the Information Technology Rules to take down content that is critical of it,” he told Scroll.

Photo: Reuters
Meta has not been transparent about its relations with the ruling BJP, experts said.

Orsek of Logically Facts was also wary of the community notes model being overwhelmed by “voices that are more active on social media”, which he said was a trend he had observed globally. He said that Meta could, in fact, perform worse than X in safeguarding against this due to the nature of discourse on its platforms.

“X is like a townhall where anyone can grab a microphone and talk,” he explained. “But Meta has been pushing people to engage in closed groups, so it runs a greater risk of becoming an echo chamber. It will be difficult for Meta to find people offering different perspectives, which is essential for community notes to work.”

Orsek also expressed concern about the changes in Meta’s policy on what constitutes hateful conduct on its platforms, asserting that these guidelines should not be the same for all countries. “What is hate speech in India may not be hate speech in the US,” he said.

Chima concurred on the risk posed by the changes in hate speech policy, saying that the revised rules now give a far greater benefit of the doubt to what social media users can say online.

“The fact that Meta has done this because of the incoming administration in the US is very alarming,” Chima told Scroll. “It raises the concern that Meta will drastically change course to keep promises made to whoever might be in power in a country. If that can happen in the US, it can happen in India too.”