

Microsoft has an awful lot of egg on its face after unleashing an online chat bot that Twitter users coaxed into regurgitating some seriously offensive language, including pointedly racist and sexist remarks. On Wednesday morning, the company unveiled Tay, a chat bot meant to mimic the verbal tics of a 19-year-old American girl, provided to the world at large via the messaging platforms Twitter, Kik and GroupMe. According to Microsoft, the aim was to "conduct research on conversational understanding." Company researchers programmed the bot to respond to messages in an "entertaining" way, impersonating the audience it was created to target: 18- to 24-year-olds in the US. Released via Twitter on March 23, 2016, Tay caused controversy when it began to post inflammatory and offensive tweets, and Microsoft shut the service down only 16 hours after launch. It was the unspooling of an unfortunate series of events involving artificial intelligence, human nature, and a very public experiment. Amid this dangerous combination of forces, determining exactly what went wrong is near-impossible. In a follow-up blog post, Microsoft apologized and explained what happened.
As many of you know by now, on Wednesday we launched a chatbot called Tay. We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay. Tay is now offline and we'll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values. I want to share what we learned and how we're taking these lessons forward.

For context, Tay was not the first artificial intelligence application we released into the online social world. In China, our XiaoIce chatbot is being used by some 40 million people, delighting with its stories and conversations. The great experience with XiaoIce led us to wonder: Would an AI like this be just as captivating in a radically different cultural environment? Tay – a chatbot created for 18- to 24-year-olds in the U.S. for entertainment purposes – is our first attempt to answer this question.
As we developed Tay, we planned and implemented a lot of filtering and conducted extensive user studies with diverse user groups. We stress-tested Tay under a variety of conditions, specifically to make interacting with Tay a positive experience. Once we got comfortable with how Tay was interacting with users, we wanted to invite a broader group of people to engage with her. It's through increased interaction where we expected to learn more and for the AI to get better and better.
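The post never says how Tay's filtering actually worked. As a rough illustration only, here is a minimal sketch of the simplest kind of output filter a chatbot team might layer in: a keyword blocklist that suppresses a candidate reply and substitutes a canned fallback. Every name and the placeholder blocklist below are invented for this sketch.

```python
import re

# NOTE: hypothetical illustration, not Microsoft's actual filter.
BLOCKLIST = {"slur_one", "slur_two"}  # placeholder terms standing in for a real list

def is_safe(reply: str) -> bool:
    """Return False if a candidate reply contains any blocklisted word."""
    words = set(re.findall(r"[\w']+", reply.lower()))
    return not (words & BLOCKLIST)

def filtered_reply(candidate: str,
                   fallback: str = "Let's talk about something else.") -> str:
    """Suppress unsafe candidate replies and substitute a canned fallback."""
    return candidate if is_safe(candidate) else fallback

print(filtered_reply("that was slur_one of you"))  # -> canned fallback
print(filtered_reply("nice to meet you"))          # -> passes through unchanged
```

The obvious weakness, relevant to what happened next, is that a static blocklist only catches terms someone thought to enumerate; it does nothing about a bot that learns novel toxic phrasings from the people talking to it.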

The logical place for us to engage with a massive group of users was Twitter. Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay. Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time. We will take this lesson forward as well as those from our experiences in China, Japan and the U.S. Right now, we are hard at work addressing the specific vulnerability that was exposed by the attack on Tay.
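Microsoft never published details of the vulnerability. But the dynamic the post describes, a system that learns from interaction being skewed by a coordinated subset of users, is easy to demonstrate with a toy model. The sketch below is a hypothetical stand-in, not Tay's architecture: a "bot" that simply remembers the phrases users send and samples its replies from them by frequency.

```python
import random
from collections import Counter

class EchoLearner:
    """Toy stand-in for a bot that learns replies from whatever users send it."""

    def __init__(self, seed: int = 0):
        self.memory = Counter()          # phrase -> how often users said it
        self.rng = random.Random(seed)

    def learn(self, message: str) -> None:
        self.memory[message] += 1        # every interaction is training signal

    def reply(self) -> str:
        phrases, weights = zip(*self.memory.items())
        return self.rng.choices(phrases, weights=weights, k=1)[0]

bot = EchoLearner()
for msg in ["nice weather today", "what's your favorite movie?"]:
    bot.learn(msg)                       # organic users: diverse, low volume

for _ in range(100):
    bot.learn("offensive phrase")        # coordinated group: one message, high volume

hostile = sum(bot.reply() == "offensive phrase" for _ in range(1000))
print(hostile / 1000)                    # ~0.98: the attack dominates the output
```

With 100 copies of one hostile message against a handful of organic ones, roughly 98% of the toy bot's replies become the hostile phrase. Volume, not sophistication, is all such an attack requires.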

Looking ahead, we face some difficult – and yet exciting – research challenges in AI design. AI systems feed off of both positive and negative interactions with people. In that sense, the challenges are just as much social as they are technical. We will do everything possible to limit technical exploits but also know we cannot fully predict all possible human interactive misuses without learning from mistakes. To do AI right, one needs to iterate with many people and often in public forums. We must enter each one with great caution and ultimately learn and improve, step by step, and to do this without offending people in the process. We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity.
