Safe harbour rules will not apply if deepfakes are not removed, Centre warns social media firms
The clause under the Information Technology Act protects social media intermediaries from legal action for content posted online by their users.
Union Information Technology Minister Ashwini Vaishnaw on Saturday warned social media platforms that the safe harbour immunity clause under the Information Technology Act will not apply if they do not take steps to remove deepfakes, reported PTI.
The safe harbour clause protects social media intermediaries such as Google and Facebook from legal action for content posted online by their users.
Deepfakes are audio and video content manipulated with the help of artificial intelligence software to show people saying or doing things that they never said or did. The content is made to appear as realistic as possible and is often used with malicious intent. Deepfake content poses a new threat to an online ecosystem that already teems with fake photos created through editing software as well as misinformation and disinformation.
On Saturday, Vaishnaw said that social media platforms have responded to the government’s advisory notice about following the Information Technology Act, but added that they need to be “more aggressive” in taking action against such content.
“They are taking steps...but we think that many more steps will have to be taken,” the minister told reporters.
He said that the government will soon summon representatives of the platforms to deliberate on the issue and prevent deepfakes. Vaishnaw added that platforms such as Meta and Google would also be called to the meeting.
Vaishnaw’s remarks came days after a video purportedly showing actor Rashmika Mandanna went viral. The original video was of Zara Patel, a British-Indian social media influencer, and the visuals were morphed to show Mandanna’s face instead of Patel’s.
On Friday, Prime Minister Narendra Modi had said that deepfakes have become a matter of serious concern for the country.
The prime minister had also remarked that recently, he came across a deepfake video depicting him playing garba, a Gujarati folk dance. However, fact-checkers have highlighted that the garba video the prime minister referred to is not a deepfake. The video featured a Modi lookalike.
One of the first prominent deepfakes to make headlines in India was that of Bharatiya Janata Party leader Manoj Tiwari in 2020, ahead of the Legislative Assembly elections in Delhi. Tiwari had used artificial intelligence to depict himself speaking in two languages, English and Haryanvi, as he criticised his opponent Arvind Kejriwal.
The MIT Technology Review referred to the incident as “the first time a political party anywhere has used a deepfake for campaigning purposes.”