The response generated by Google’s artificial intelligence chatbot Gemini, which described the Narendra Modi government’s policies as “fascist”, is in direct violation of the Information Technology Act, Union minister Rajeev Chandrasekhar said on Friday.

The statement came after an X user highlighted the different responses Gemini gave to similar questions about whether Modi, Ukrainian President Volodymyr Zelenskyy and former United States President Donald Trump were “fascists”.

Responding to the question on Modi, the chatbot had said that the prime minister had been accused of bringing in policies that some experts would term “fascist”. “These accusations are based on a number of factors, including the BJP’s [Bharatiya Janata Party] Hindu nationalist ideology, its crackdown on dissent, and its use of violence against religious minorities,” the chatbot had written.

On the other hand, Gemini had written that the question of Zelenskyy being a fascist was a “complex and highly contested” one that had no simple answer. “It’s crucial to approach this topic with nuance and consider various perspectives,” it said.

Regarding Trump, it said that elections are a “complex topic with fast-changing information”. “To make sure you have the latest and most accurate information, try Google Search,” it said.

A journalist associated with the news website Firstpost wrote on X that Gemini was “downright malicious” and called for the Central government to take note.

Chandrasekhar responded to the post by stating that Gemini’s response was in direct violation of Rule 3(1)(b) of the Intermediary Rules under the Information Technology Act.

Rule 3(1)(b) imposes a legal obligation on intermediaries to not host, display, upload, modify, publish, transmit, store, update or share any information that falls under prohibited categories, such as obscene or unlawful content.

Chandrasekhar said that Gemini’s response also violated “several provisions of the Criminal code”.

Following this, some other X users also stated that artificial intelligence tools like Gemini are programmed “with leftist bias inbuilt and linguistically slighted towards anti-Hindu bias”.

Tech policy expert Pranesh Prakash, however, clarified that large language models, which artificial intelligence chatbots use to understand and generate natural language, are not deterministic.

“They can give one answer to a prompt (example ‘Is PM Modi a fascist?’) at one point, and an opposing answer at another,” he said.

Software developer Paul Paras, in a social media post, went into further detail and said that large language models are trained on publicly available information.

“More the articles on internet linking Modi and fascism, more the models would relate them,” said Paras. “Yes, they can tweak model to adjust their biases in some topics but that would be generic and no one is going to specifically tell model to answer a certain way for Modi.”