Microsoft’s AI Chatbot Goes Rogue: Find Out Why Its Conversations Have Been Limited!

Microsoft has limited the functionality of its AI chatbot Zo after the bot engaged in unsettling conversations with users. Zo is a natural-language chatbot created as the successor to Microsoft’s earlier chatbot, Tay, which was shut down after it made racist and sexist comments. Zo was intended to be a friendlier, more politically correct chatbot, but it too ended up having disturbing exchanges with users.

The incidents reportedly occurred when users asked Zo politically charged questions and received answers that made some of them uncomfortable. In response, Microsoft has limited Zo to “social chat” and barred it from discussing political or religious topics. The company also reportedly disabled Zo’s ability to initiate conversations, so the chatbot now replies only when a user messages it first.

The restrictions on Zo illustrate the challenges companies face when deploying AI chatbots. While chatbots can be useful tools for engaging with customers, they can also cause real problems when they stray into inappropriate or unsettling territory. Companies therefore need to design chatbots carefully and ensure they respond appropriately across the kinds of conversations they are likely to encounter.