On a cold afternoon in New Delhi this week, dozens of journalists gathered in a conference room of a five-star hotel at the invitation of the world’s largest messaging platform, WhatsApp.
The purpose of the meeting was clear: to talk about how the Facebook-owned company is doing everything in its power to check the misuse of the app, especially attempts to message users in bulk. WhatsApp had flown in two senior leaders from Menlo Park, California, at a crucial time, as its largest market, gripped by a fake news menace, heads into elections in a few months.
WhatsApp has been engaging with Indian political parties to explain to them that the app is “not a place to send messages at scale,” Carl Woog, the company’s head of communications, told reporters. The company has also warned politicians that if they violate this rule, it would result in their accounts being banned.
The warning is perhaps the need of the hour in a country where over 25 people have been killed in mob lynchings thought to have been sparked by rumours spread on WhatsApp.
This workshop is only one of many examples of the efforts that WhatsApp and other social media giants, including Facebook and Twitter, are undertaking to fight fake news and prevent misuse of their platforms ahead of the Indian election.
With an estimated 300 million users in India, Facebook is a key social media battleground for political parties. Its policy team has weathered years of controversy and conflict with the Indian government. Now, it’s readying itself for election-related drama.
Elections in Brazil and the US have taught Facebook “how important the strong partnerships we had with the governments, the civil societies, with third-party groups, and with vendors” were, Katie Harbath, the company’s global politics and government outreach director, said in a January interview with The Economic Times.
So in India, Facebook plans to work extensively with partners in the months ahead, including the Election Commission of India as well as third-party groups, in order to help locate content that needs to be taken down.
Harbath also said that “key personnel” would soon be appointed to ensure election integrity in India. Another Facebook team will tackle the fake news threat to the 2019 Indian election from an operations centre in Singapore, where the company’s Asia-Pacific office is located.
Late last month, the social networking giant announced that it’s seeking to make Facebook pages more accountable. In India, pages have routinely been breeding grounds for fake news, including much of the user-generated content on Narendra Modi’s official app.
This month, Facebook is rolling out political ad transparency measures that it first announced in December. “Ahead of India’s general elections, we’re making big changes to ads that reference political figures, political parties, elections and ads that advocate for or against legislation,” Facebook said in a statement yesterday.
Ads related to politics in India will now contain a disclaimer about who paid for them, and the company’s library of political ads, previously only available for Brazil, the UK, and the US, has now begun compiling data from India as well. Starting February 21, Facebook will only allow advertisers who have completed an identity verification to run political ads in India.
It is important that companies like Facebook are accountable and transparent about who is funding political advertising, Sarvjeet Singh, executive director of the think tank the Centre for Communication Governance, told Quartz. But “beyond that, I think there’s only so much that these companies can do…These are questions that the state and the ECI [Election Commission of India] need to figure out.”
For instance, Singh asked, “Will [Facebook] take down all the political ads that have been running for a 48-hour period?”, referring to the Election Commission’s requirement that all campaigning be halted two days before the election. “That is something the ECI will have to mandate for them.”
Last month, Twitter announced that it was launching a new dashboard, similar to Facebook’s Ad Library, that would show how much money political parties were spending on advertising on its platform.
Twitter CEO Jack Dorsey visited India in November and met key politicians while in the country, including Prime Minister Narendra Modi and president of the opposition Congress party, Rahul Gandhi.
In an interview with the Hindustan Times from that visit, Dorsey said that Twitter had “opened a focus room” in India, so the company could stay abreast of national events and remain in contact with government agencies.
Compared to those of Facebook and WhatsApp, Twitter’s Indian user base of around 35 million is small. But in Indian politics lately, Twitter has been put on the defensive perhaps even more than Facebook and WhatsApp.
For months, political right-wingers in India have alleged that Twitter is biased against them. After a group of young volunteers took to the streets over the issue, a parliamentary panel was formed to evaluate whether Twitter is exhibiting bias and violating users’ rights.
This comes after controversy struck Twitter during Dorsey’s India visit, when he was photographed holding up a poster that said “Smash Brahminical Patriarchy,” drawing anger from many right-wing Hindu Indians.
And as elections approach, Twitter still does not have an India head, after the previous one quit last September.
WhatsApp has spent the last several months rolling out one initiative after the other to combat the misinformation crisis that has surrounded it in India since last summer.
In July 2018, the company imposed a limit on message forwarding in the country, restricting users with Indian phone numbers to forwarding a message to only five chats at a time. Last month, the company took this restriction global.
WhatsApp’s election integrity efforts extend to ads, too. Between September and January, the company reportedly spent around Rs 120 crore on television, print, and radio advertisements in India aimed at educating the public about the dangers of fake news.
But as elections approach, WhatsApp’s ability to respond quickly to abuse as it happens on the platform will perhaps be the most critical. Around two million accounts globally get banned from WhatsApp each month due to abusive behaviour. (It is not clear how many of these accounts are from India, though the country constitutes WhatsApp’s largest user base, at over 200 million of its 1.5 billion users.)
To protect user privacy, WhatsApp uses end-to-end encryption, meaning even the company cannot see the content of messages. Instead, it homes in on abusive actors by analysing behavioural signals, such as “how fast people are messaging, computer networks they’re using, maybe how long since they first registered,” Matt Jones, lead software engineer on WhatsApp’s integrity team, said at the Delhi workshop this week.
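The kind of behavioural-signal scoring Jones describes can be illustrated with a toy sketch. The signals, weights, and thresholds below are purely hypothetical, not WhatsApp’s actual model; the point is only that such a system can flag bulk messaging without ever reading message content:

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    """Hypothetical per-account metadata; note that no message content is used."""
    messages_per_minute: float  # recent sending rate
    account_age_days: int       # time since registration
    distinct_recipients: int    # fan-out in a recent window

def spam_score(s: AccountSignals) -> float:
    """Combine behavioural signals into a simple abuse score in [0, 1].

    The weights and cut-offs here are illustrative placeholders.
    """
    score = 0.0
    if s.messages_per_minute > 30:   # bursty, bot-like sending
        score += 0.4
    if s.account_age_days < 2:       # freshly registered account
        score += 0.3
    if s.distinct_recipients > 100:  # bulk fan-out to many chats
        score += 0.3
    return min(score, 1.0)

# A brand-new account blasting messages to hundreds of chats scores high,
# while an old account chatting normally scores zero.
suspect = AccountSignals(messages_per_minute=120, account_age_days=1,
                         distinct_recipients=500)
normal = AccountSignals(messages_per_minute=0.5, account_age_days=400,
                        distinct_recipients=5)
print(spam_score(suspect))  # -> 1.0
print(spam_score(normal))   # -> 0.0
```

In a real system, a score above some threshold would trigger further review or an automated ban; the two million monthly bans WhatsApp reports are presumably driven by far richer signals than these three.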
Is all this enough?
It is difficult to pass judgement on many of these election integrity efforts because they are either internal processes, like WhatsApp’s monitoring of bulk messaging, or projects that have not been fully rolled out, like Facebook’s and Twitter’s ad transparency resources.
The companies, some believe, are running short on time. “I would have assumed that they would have brought all this technology or measures much before because parties have already started campaigning,” Singh of the Centre for Communication Governance said. “It would just have been useful to implement all this at least from December, if not earlier.”
A related worry is that social media companies may be less inclined to care about protecting election integrity in India than elsewhere.
“They’re not particularly convinced that India can raise the regulatory stick against the industry in an effective way,” Dipayan Ghosh, a researcher at the Harvard Kennedy School’s Shorenstein Centre, who previously worked at Facebook, told Quartz. “It’s why they are slower to move to protect India or Myanmar than they are to respond to the US Congress.”
Some caution that the issue of election integrity is much bigger than web platforms. “If we are looking at the entire problem of misinformation and how it impacts Indian democracy, we first need to look at the political parties,” Apar Gupta, director of the Internet Freedom Foundation, a New Delhi-headquartered advocacy organisation, told Quartz. “I think our approach in terms of analysis and critique has been very lopsided towards examination of platforms, which do play a very recognisable role in this entire issue, but I think the primary actors which need to be regulated are contractors and IT Cells of political parties.”
Various factors, then, will have to work in tandem to preserve the integrity of the election in the world’s largest democracy.
This article first appeared on Quartz.