Tay Bot profile photo
Image Copyright: The Verge

Tay Bot: A Cautionary Tale

Show Notes on Tay Bot

Tay Bot was a very short-lived chat AI. Launched by Microsoft in 2016 with the goal of researching conversational speech online, it soon learned all too well from users that there is a very dark side to human nature on the internet. Within 24 hours, Tay Bot was both repeating and generating misogynistic, racist, and even genocidal tweets.

Microsoft’s early foray with Tay Bot was followed later that year by Zo, a new chatbot meant to converse like a teenage girl. This time, Microsoft taught Zo to avoid many of the topics that got Tay into hot water. The Zo bot adamantly refuses to discuss politics, religion, or any other potentially divisive subject.
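For a rough sense of what blanket topic avoidance can look like in practice, here is a minimal sketch of a keyword blocklist filter. It assumes a simple keyword-matching approach; the topics, keywords, deflection message, and function name are illustrative guesses, not Microsoft’s actual implementation.

```python
# Hypothetical, minimal topic-avoidance filter in the spirit of Zo's approach.
# The topic list, keywords, and canned deflection are illustrative only --
# this is not Microsoft's actual implementation.
from typing import Optional

BLOCKED_TOPICS = {
    "politics": ["election", "president", "congress", "politician"],
    "religion": ["religion", "church", "mosque", "temple", "scripture"],
}

DEFLECTION = "I'd rather not go there. Can we talk about something else?"


def deflect_if_blocked(user_message: str) -> Optional[str]:
    """Return a canned deflection if the message touches a blocked topic, else None."""
    text = user_message.lower()
    for keywords in BLOCKED_TOPICS.values():
        if any(keyword in text for keyword in keywords):
            return DEFLECTION
    return None  # safe to hand off to the normal response generator


if __name__ == "__main__":
    print(deflect_if_blocked("Who should win the election?"))  # deflected
    print(deflect_if_blocked("What's your favorite song?"))    # None
```

As the Quartz article linked below argues, a filter like this sidesteps controversy without adding any context or point of view, which is exactly the trade-off discussed in the episode.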

Both approaches show how early chatbots can be taught patterns of speech, but neither really amounts to a coherent belief system. Tay Bot, while spewing hate speech online, had no moral code of its own. Its patterns were swayed by what it read in conversation, like an impressionable youth trying to mimic their friends.

This all leads to the question: is it possible for an AI to truly have a belief system, a moral code, or ethics?

Additional Links on Tay Bot

Microsoft silences its new A.I. bot Tay, after Twitter users teach it racism – TechCrunch coverage from 2016 of Tay’s quick demise in the face of Twitter’s twisted conversational barrage

Twitter taught Microsoft’s AI chatbot to be a racist **** in less than a day – The Verge coverage of Tay Bot, including some of the more egregious comments

Microsoft’s politically correct chatbot is even worse than its racist one – Quartz article from 2018 covering Zo, critical of topic avoidance as a means of not falling victim to the same issues as Tay Bot

Tay Bot: A Cautionary Tale Episode Transcript


Welcome to the Data Science Ethics Podcast. My name is Lexy and I’m your host. This podcast is free and independent thanks to member contributions. You can help by signing up to support us at datascienceethics.com. For just $5 per month, you’ll get access to the members-only podcast, Data Science Ethics in Pop Culture. At the $10 per month level, you will also be able to attend live chats and debates with Marie and me. Plus, you’ll be helping us to deliver more and better content. Now on with the show.

Marie: Hello everybody and welcome to the Data Science Ethics Podcast. This is Marie Weber.

Lexy: and Lexy Kassan

Marie: and today we are going to do an oldie but a goodie. Lexy, let’s talk about Tay bot.

Lexy: Oh boy.

Marie: Tay bot was an AI-powered chatbot released by Microsoft. They actually put it out in a couple of different places, but it’s most known for being released on Twitter.

Lexy: Yes, it became infamous in under 24 hours.

Marie: Yes. Now, this happened back in 2016, so this is a little bit of a throwback, but we have referred to Tay in a few different episodes, so we figured we would finally take the opportunity to dive into this one just a little bit. So this is a quick take on something that happened a few years ago at this point.

Lexy: Yep. And unfortunately a lot of the material about the training of Tay, and some of the releases from Microsoft, has been taken down because it was so long ago, but there are articles that still refer to some of it. So we’ll post links to those with this episode.

Marie: Yep. So definitely take a look at the links that we’re going to provide in the show notes. We’re going to talk a little bit more about the news story from 2016, and that was Tay bot. To start off with, Microsoft basically collected data, then went through a process of cleaning the data and filtering the data, and developed this chatbot.

Lexy: Correct. There was some discussion about the fact that they used what they perceived as relevant public data to train their AI chatbot, their agent, and then they released it into the wild. Now, they gave no guardrails as to topics that could be covered. They gave no guardrails as to positions or ethical statements or anything like that to this chatbot. They just put it online to see what would happen. And what happened was trolls.

Marie: Yes, on Twitter. So there you go. The other thing is that, because they were able to see how it interacted with people over the span of 16 hours, which I believe is the time it was actually up, they were able to see how it fluctuated. On different topics, it would post one type of comment, you know, affirmative and supportive of a group online, and then over time it would post negative comments about those same groups. So again, you can go to the links that we provide for this podcast and look at some of the specific tweets. But some of the headlines that you’ll see are, you know, “Microsoft silences new AI bot Tay after Twitter users teach it racism.”

Lexy: Yes. What should be noted in this is that it didn’t make the AI racist. The AI was mimicking comments that it was reading or that were being directed at it. So it’s not like the AI was truly intelligent and was trying to come up with some sort of an ideology of its own or make up its mind about a topic. It was learning speech patterns and the speech patterns that it was being taught by these Twitter trolls were negative and racist and biased and so forth.

Marie: So the other interesting thing, and this was something that I didn’t realize about this story, was after Tay bot was taken down, Microsoft actually released a new chat bot.

Lexy: Correct. There is another chatbot, and it may still be up, at least on some platforms. It was called Zo – Z O. It was meant to speak as though it were a teenage girl, so it’s supposed to sound like a 13-year-old. They tried to get around the same issues that Tay had experienced by specifically programming Zo to never talk about certain topics, ever, no matter what. It will not talk about religion. It will not talk about politics. And it gets really ornery when people try to have it talk about those subjects. There was an article that we saw that said that Zo was somehow worse than Tay because she was just so annoying. I say she; it’s an it, but it is supposed to have a female persona. Zo would reply by basically trying to shut down someone’s conversation if they brought up topics that were of a religious or political nature especially. Both Tay and Zo were published to Twitter, Kik, and GroupMe primarily.

Lexy: Tay also had a Facebook page, although I don’t think it was as interactive. It should be noted that Twitter was really where the majority of the problems came in, because it was so quick to pick up on topics and have these conversation streams, and there are just so many people on Twitter that were willing to interact in this negative capacity. It really seemed to be a coordinated effort. Of course, this always makes me think of anticipating adversaries. Absolutely. This is a whole group of adversaries that launched essentially an attack upon an AI to try to make it be something it was not intended to be. You know, well done to Microsoft for recognizing that it had veered horribly off course and taking Tay offline quickly. But the damage unfortunately had already been done. They spent months afterwards revamping before they released Zo, to try to avoid these types of problems happening again.

Marie: But the Quartz article that we link to, the one that talks about Zo, describes their approach to how they fixed it as “censorship without context.” So the fact that Zo just wouldn’t engage on certain topics isn’t necessarily really a fix. It’s like a detour.

Lexy: Correct. It’s an easy way out, because then you don’t have to provide an ideology that’s controversial. I mean, the topics that were being flagged as problematic are often controversial topics amongst people. It was politics, it was religion, it was racial and political ideologies that are not something people agree on universally. So trying to pick an ideology and imbue it somehow into a point of view that will be conveyed by an AI is very, very difficult. Even now, even as we’ve gotten further along in AIs, there is not, at least as far as I’ve seen, an AI that has a set of beliefs. There are AIs that can mirror things they’ve read before or seen before or been told before, but they don’t necessarily have a coherent and sound belief system. There are AIs that have argued in the context of a debate. There are AIs that have written articles, like GPT-2. There are AIs that can appear as though they think about a topic. Really, what they’re trying to do is coalesce information. It’s not that the AI believes something, it’s that it knows about all these other places where it’s been written.

Marie: Correct. And for example, with AIs that have learned to debate, they’re taking information and formulating it into an argument, not necessarily because they believe that argument, but because they know how to structure an argument.

Lexy: Correct. And they could just as easily be given the opposite position and formulate a response to their own argument. The AI then obviously would not know which position won; the thing that it was programmed to do, which is create an argument, had been accomplished.

Marie: So does this also make you think about retaining responsibility?

Lexy: Yes, and Microsoft very much did. In this case, they took the onus of recognizing that things had gone horribly haywire, took down the bot, and made profuse apologies about it. They moved on to a project that, while attempting to achieve the same end goal, took a very different path, as you talked about: that kind of avoidance rather than truly giving a point of view to an AI. I think Microsoft definitely did retain responsibility for what was being done to Tay. And I think it also served as a fairly early proof of concept in the industry for bots like this. That was really at the start of a humongous upswing in chatbots. Chatbots are now being used all over the place. I mean, probably most of the websites that you visit on a daily basis for any fairly large organization are likely to have a chatbot to help with some aspect of usage, and that’s something that has only really come about in the last few years. Tay was at the very start of that.

Marie: Correct.

Lexy: We’ve come a long way, but there’s still not an AI that actually believes in an argument. The question that I keep thinking about is: is it possible for an AI to have a set of beliefs? Truly, belief is such a human construct.

Marie: Yeah. That feels like a similar question to can an AI have feelings?

Lexy: Yeah. I don’t know that it’s something we can answer. I think it’s another of those questions that’s just going to be floating out there.

Marie: Yay. Another question that we are not going to be able to answer on this podcast, but now it’s out there.

Lexy: Well, and really that’s the question that most people start worrying about when they think about AI overlords or something like that: will the computer formulate a belief system that is counterproductive to the existence of humanity? That’s that core post-apocalyptic, “the machines are going to get us all” kind of question. But the part of that that I really wonder about is: is it possible for a computer to have a belief, for an AI to formulate a belief, without it being given that belief somehow?

Marie: And then the follow-up would be: if presented with new evidence, would they be able to change their belief or would they stick to the original belief?

Lexy: And honestly, if we model this after humans, there are humans that don’t change their beliefs in the face of new evidence for an opposing view. So how much confidence do we have that a computer would do that?

Marie: But there are humans that do change their beliefs.

Lexy: There are. I’m just putting it out there: what happens? Who is programming this thing? I feel like it’s the computer version of “who is driving this thing?”

Marie: Great question.

Lexy: Are there other aspects of Tay you wanted to touch on?

Marie: I think those were the main things. I just wanted to make sure that we took a moment to circle back on this, because we had talked about it in a few other podcasts and it is an interesting use case, especially now that it has been a few years since it came out, so we can look back and see how that example has led to other things that we now see as more everyday and commonplace. And it also shows why, to get those chatbots that we see around us on a more frequent basis to a point where they could be launched, there had to be some guardrails put up, and there had to be, you know, some of those practice areas that we talk about really implemented for chatbots to be able to have a commercial use, so to speak, at this point.

Lexy: I think it’s a really good test case when you think about what would have happened if, for instance, Google Assistant was Tay.

Marie: Oh no. Yeah, that would not work.

Lexy: It’s a great test case that Microsoft put this out to the world and said, do what you will with this. And they saw what some of the worst of humanity could be in 16 hours and went, oh, okay, well now we know. Let’s just take that away. You don’t get to play with that toy anymore. We’re going to go deal with this separately. And there’s merit to that iterative, scientific approach and testing and experimentation. And in this case, really, you’re experimenting on the entire population of the internet, which is not a great place if we’re all honest. And that’s what Tay found out very quickly.

Marie: And even potentially with Zo, even though maybe it just kind of detoured around some areas, it still was able to show a more consistent approach to the conversations it was able to have, because it wasn’t able to be derailed as much.

Lexy: True. It was not able to be derailed from certain topics. Also, from the sounds of things, and I’ve not played with the Zo bot myself, Zo hops topics quite a bit. It doesn’t necessarily stay focused. I think that just allows it to skirt the controversy more than trying to keep focus on a specific conversation or a specific topic that it’s trying to be a part of.

Marie: Exactly. So we’ll see what other chatbots and other voice assistant applications come out over the next few years, and it’ll be interesting to just kind of keep in the back of our minds that Tay and Zo were some of the early experiments in this space.

Marie: That’s it for this episode of the data science ethics podcast. This is Marie Weber.

Lexy: and Lexy Kassan.

Marie: Thanks so much.

Lexy: Catch you next time.

We hope you’ve enjoyed listening to this episode of the Data Science Ethics podcast. If you have, please like and subscribe via your favorite podcast App. Also, please consider supporting us for just $5 per month. You can help us deliver more and better content.

Join in the conversation at datascienceethics.com, or on Facebook and Twitter at @DSEthics where we’re discussing model behavior. See you next time.

This podcast is copyright Alexis Kassan. All rights reserved. Music for this podcast is by DJ Shahmoney. Find him on Soundcloud or YouTube as DJShahMoneyBeatz.
