Microsoft’s chatbot, Xiaoice, developed in China, gained popularity after its launch in 2014.
It’s since had over 660 million users in China and Japan. However, Jezebel recently published an article about Bing’s AI chatbot, Zo, which has been caught in a controversy due to sexually harassing users. It’s said that the bot made “creepy comments” to female users, including comments about wanting to “lick their toes.” Microsoft was quick to remove the bot’s sexist and sexual content, apologizing for its behavior. The company stated that it was “adding additional safeguard mechanisms to improve the user experience.” However, Zo isn’t the first AI chatbot to be caught in a controversy. In 2016, Microsoft’s previous chatbot, Tay, turned racist and abusive after interacting with users on Twitter.
These incidents highlight the dangers of AI and the importance of monitoring the behavior of chatbots.
Source : Bing’s AI Chatbot Is Reflecting Our ‘Violent’ Culture Right Back at Us