WhatsApp banned over 20 lakh Indian accounts between May 15 and June 15
The Facebook-owned messaging platform made the declaration in its first compliance report as mandated under the new IT rules.
Facebook-owned messaging platform WhatsApp on Thursday said that it had banned 20,11,000 accounts of Indian users for misusing the application’s services between May 15 and June 15. The company made the declaration in its first compliance report as mandated under the new Information Technology (Guidelines for Intermediaries and Digital Media Ethics Code) Rules, 2021.
A sweeping set of rules was issued on February 25 to regulate social media companies, streaming services and digital news content. The new rules, which came into effect on May 26, effectively bring these platforms under the ambit of government supervision for the first time.
The rules require social media platforms with more than 50 lakh users in India to publish compliance reports every month, mentioning the details of complaints received and action taken. The platforms also need to mention the number of specific communication links or parts of information they have removed or disabled access to after “proactive monitoring” via automated tools.
WhatsApp said that it had received a total of 345 reports, of which 204 were categorised under “ban appeal”, 70 under “account support”, 43 under “product support”, and 20 under “other support...related to requests that are not consistently classifiable”. Eight of the total reports fell under the “safety issues” category.
“In addition to responding to and actioning on user complaints through the grievance channel, WhatsApp also deploys tools and resources to prevent harmful behaviour on the platform,” the messaging platform said in its report on Thursday. “We are particularly focused on prevention because we believe it is much better to stop harmful activity from happening in the first place than to detect it after harm has occurred.”
WhatsApp clarified that over 95% of the bans on Indian user accounts were due to the unauthorised use of automated or bulk messaging, PTI reported.
“We expect to publish subsequent editions of the report 30-45 days after the reporting period to allow sufficient time for data collection and validation,” the platform said in its statement.
WhatsApp also highlighted that the number of accounts banned has gone up since 2019 because its detection systems have become more advanced. As a result, the company said, it was “catching more accounts even as we believe there are more attempts to send bulk or automated messages”, according to PTI.
Most of the accounts that send out messages in bulk are banned proactively, without the platform relying on user reports, WhatsApp said. Close to 80 lakh accounts are reportedly banned worldwide on average every month.
WhatsApp said it usually relies on behavioural signals from user accounts and on available “unencrypted information” such as profile and group photos and descriptions, as well as advanced artificial intelligence tools. These techniques are used to identify and prevent abuse on the platform, the company added.
Microblogging platform Twitter, search engine Google, and WhatsApp’s parent company Facebook have already released their reports.
India made the most requests to Twitter seeking information about accounts, the social media company’s transparency report for July 2020 to December 2020 said on Wednesday. The platform received 1,096, or 46%, more routine requests from India compared with the January 2020 to July 2020 period. Routine requests are legal notices from the government that the social media company must adhere to by submitting information about the accounts concerned.
On July 3, Facebook said it took action against more than 3 crore content pieces between May 15 and June 15. Of the 3 crore posts against which Facebook said it took “proactive action”, 18 lakh contained adult nudity and sexual activity, 25 lakh were related to violent and graphic content, while 2.5 crore of them were spam.
On June 30, Google published its first monthly transparency report. The search engine said it had removed 59,350 pieces of content in April, following 27,762 complaints from users in India.