The Centre on Tuesday issued an advisory to all social media platforms, directing them to comply with the Information Technology Rules amid “growing concerns around misinformation powered by AI-deepfakes”.

A deepfake is audio or video manipulated with artificial intelligence software to depict people saying or doing things that they never said or did. The content is made to appear as realistic as possible and is usually used with malicious intent.

The advisory states: “Content not permitted under the IT Rules, in particular those listed under Rule 3(1)(b) must be clearly communicated to the users in clear and precise language including through its terms of service and user agreements.”

Rule 3(1)(b) requires social media platforms to communicate their rules, regulations, privacy policy and user agreements in the user’s preferred language. Rule 3(1)(b)(v) explicitly prohibits the dissemination of misinformation.

“Misinformation represents a deep threat to the safety and trust of users on the Internet,” Union Minister of State for Electronics and Information Technology Rajeev Chandrasekhar said. “Deepfake which is misinformation powered by AI, further amplifies the threat to safety and trust of our Digital Nagriks [digital citizens].”

The Centre’s advisory also states that social media companies must make users aware of penal provisions of the Indian Penal Code, the Information Technology Act and other laws that may apply in such cases.
