Microsoft apologises for 'offensive and hurtful' tweets by its AI chatbot Tay
The company was forced to shut down the program after the bot learned to parrot anti-Semitic, racist and sexist posts by human Twitter users.
Tech giant Microsoft on Friday said it was “deeply sorry” for offensive tweets generated by its chatbot launched earlier this week. The bot, known as Tay, was Microsoft’s attempt at engaging millennials with artificial intelligence to “experiment with and conduct research on conversational understanding”, reported The Guardian. However, the company was forced to apologise and shut down the program after the bot went on an embarrassing tirade, having learned to parrot anti-Semitic, racist and sexist posts by human Twitter users, reported Reuters.
“We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay,” wrote Peter Lee, Microsoft’s vice president of research. Microsoft also said in a blog post that it would revive Tay only if its engineers could find a way to prevent web users from influencing the chatbot in ways that “undermine the company’s principles and values”.