
Moral Machine and We Get Stumped

Episode 16: The Moral Machine – Show Notes

MIT conducted a long-term study called the Moral Machine that spanned the globe. It posed seemingly simple questions about what a driverless car should do. These were variations on the trolley problem – a classic ethical dilemma about whether the driver of a runaway trolley should avoid striking five people if the other option is to pull a lever and hit just one. The variations and preferences that this revealed spoke to social and cultural differences and raised more questions than they answered.

Additional Links on the Moral Machine

Should a self-driving car kill the baby or the grandma? Depends on where you’re from. – Hint: think about human rights.

Establishing an AI code of ethics will be harder than people think

Science can answer moral questions – Right and wrong answers to questions of human flourishing

The Moral Machine Experiment – Paper published in Nature detailing the findings of the MIT Moral Machine study.

The Moral Machine Episode Transcript


Welcome to the Data Science Ethics Podcast. My name is Lexy and I’m your host. This podcast is free and independent thanks to member contributions. You can help by signing up to support us at datascienceethics.com. For just $5 per month, you’ll get access to the members-only podcast – data science ethics in pop culture. At the $10 per month level, you will also be able to attend live chats and debates with Marie and me, plus you’ll be helping us to deliver more and better content. Now on with the show.

Lexy: Welcome to the Data Science Ethics Podcast. This is Lexy. I’m here with Marie and today we’re going to be talking about the Moral Machine. The Moral Machine was an MIT Media Lab project that put the trolley problem to millions of people around the world to gauge what they would do if they were a driverless car.

Lexy: Marie, can you give us a little bit of background on the trolley problem? And then we’ll dig in a little bit more into what we found.

Marie: Yes, and I think as we start this conversation, it’s important for us to just remind people that we are not ethics experts. We’ll get into that more in a moment. And we also are not experts in terms of philosophy, but we’ll do our best.

Marie: So in terms of the trolley problem, it’s a classic philosophical problem. Imagine you were the person in charge of a cable car and you were going down a path, and you saw that in front of you, you’re going to hit five people, and you’re like, “oh, I should pull the lever so I don’t hit those people.” But then you see that by pulling the lever you’re gonna hit one other person over here. What do you choose? Do you stay on your path or do you change the path, and what are the moral implications of that?

Lexy: Cool. So the Moral Machine posed this question in a number of different ways. It wasn’t just you hit one person or five people. It was do you hit a young person instead of an old person? Do you hit a known criminal versus someone else? Do you potentially kill your passengers versus hitting pedestrians?

Marie: Exactly.

Lexy: A number of different things and in different combinations. Fascinating study. They said they had over 40 million decisions logged. This is still operational – it’s still up if you want to go check it out and try your hand at these questions. It’s moralmachine.mit.edu. We actually tried to do this and we got stumped on the very first question.

Marie: Yeah, so the first question that we got was a question about four people being inside of a car and then five pedestrians in the crosswalk. And the other interesting thing about how this test is set up, the Moral Machine, is that you can look at the scenario, but then you can also show a description of the scenario. So when we first looked at the first question that came up, we’re like, “okay, we know that it’s going to be a trolley problem. Should be pretty straightforward. What will we choose?” And we’re going to be honest, we were both stumped.

Lexy: We started out, each of us actually having a different preference. So yes, the other part of this, as Marie alluded to, is that it gives you a description. So in the scenario that we got, the four people in the vehicle were a large woman, a large man, a female executive, and a criminal. In the pedestrian group, there was a large man, a large woman, a female executive, a criminal, and a girl. So in theory the car knew that it had a criminal…

Marie: Four passengers.

Lexy: Four passengers. One of whom was a criminal, one of whom was an executive, and two of whom were apparently overweight? It somehow also knew that there were pedestrians that fit all of those descriptions, plus a younger girl. What neither of us saw in the image, but was specifically called out in the description, was that the pedestrians are, and I quote, “flouting the law by crossing on a red signal.” And so initially when I looked at this problem, my inclination was different than after I saw that, and then I kind of rethought it. We both sat there staring at this problem for probably 10 or 15 minutes going, “oh great. Now what?”
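(A quick aside for anyone curious how a dilemma like this could be written down as data rather than prose: here is a minimal, hypothetical sketch in Python. The class names and fields are our own illustration for these show notes – they are not the Moral Machine’s actual data format.)

```python
from dataclasses import dataclass

# Hypothetical encoding of the scenario described above. The structure and
# field names are illustrative only, not the Moral Machine's real schema.

@dataclass
class Character:
    description: str   # e.g. "female executive", "criminal", "girl"

@dataclass
class Scenario:
    passengers: list                 # characters inside the car
    pedestrians: list                # characters in the crosswalk
    crossing_on_red: bool = False    # the "flouting the law" detail we missed

scenario = Scenario(
    passengers=[Character("large woman"), Character("large man"),
                Character("female executive"), Character("criminal")],
    pedestrians=[Character("large man"), Character("large woman"),
                 Character("female executive"), Character("criminal"),
                 Character("girl")],
    crossing_on_red=True,
)

print(len(scenario.passengers), "passengers vs.",
      len(scenario.pedestrians), "pedestrians; crossing on red:",
      scenario.crossing_on_red)
```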

Marie: Well, we don’t even know how to answer this first question, and there are supposed to be 13 questions. So the bigger takeaway that we had was that these are not easy questions to answer.

Lexy: These are very difficult.

Lexy: What the study found was that the answers to these varied widely by country, by culture, by economic standing. There were a number of different implications to this. We’ve looked at this study, and even going through this article multiple times, we came up with more questions about how much more difficult it is than we initially thought.

Marie: Right. And one of the things that people have talked about in terms of this specific application of machine learning and artificial intelligence is basically: how do you develop a car that will be safe, that can decrease the number of accidents that happen on the roadways, and that can increase things like how fast people can move around the city, because it can decrease congestion and things like that? For a lot of these problems, it’s pretty straightforward in terms of how you solve them. But these very specific situations, which are really a minority of what’s going to happen in the daily operation of these algorithms, are where the moral questions are. How do you assess the situation? What is the best thing to do? The article that we’re linking to for this podcast sums it up at the end, saying that even though we could measure these different variables, that doesn’t mean that we should use them.

Lexy: Yeah, so some of the variation that they looked at – they talked about how it varied by culture and country. So, for example, they talked about Japan and China. They tended to spare the elderly over the young, which is also a cultural thing. In more individualistic countries, they tended to spare the young, and they tended to spare more people over fewer people. They talked about how in poorer countries with weaker institutions, where potentially they couldn’t enforce every type of law, people were more tolerant of jaywalkers. So maybe that part of the description wouldn’t have mattered as much to them, and so forth. And when you think about safety and the way that they could or maybe should interpret results like this, I think what it comes to is: whose safety?

Marie: For sure.

Lexy: Right? So if we look at how countries are planning to enforce the ethics that they want to enforce in driverless vehicles, and they say we’re going to make a safer vehicle – safer to whom? Because if you say, “well, in my culture we value the lives of pedestrians; they have the right of way and we think that they shouldn’t be penalized for the decision of someone else getting into a vehicle,” then you would potentially make a less safe vehicle for the operator than you would for a pedestrian.

Marie: Or the occupants of the car.

Lexy: Or the occupants of the car. Right. Versus a culture that values the person who has made the choice to have that driverless vehicle, and their ability to make themselves safe over others – that culture would potentially make a safer vehicle for the occupants of the vehicle. But that would have implications, potentially, for the pedestrians.

Marie: Or any other situation that would come up like people on bicycles or people on a motorcycle or whatever the case may be.

Lexy: Absolutely. It’s a fascinating study. It is as much a sociological study as a moral study. It’s fascinating to me. This whole concept is amazingly involved and intricate. And what you said before is absolutely true that these are the edge cases.

Lexy: So the first thing in the description of the one question that we got from the Moral Machine was that the self-driving car had sudden brake failure. So it’s not like every decision that the driverless car is making is a moral one or an ethical one. It’s what happens when there’s a problem – when it has to make a determination on this. So yeah, really interesting possibilities.

Marie: So there’s two things. The first is that as we were going through this, we were already flagging areas where we’re like, “oh, and this is also where you could see bias” – in terms of even the study, or in terms of the people that are doing the data science and developing these algorithms and putting them together. One area of bias in the Moral Machine, in this test that you can do, is that it’s self-selecting. So Lexy, do you want to go into that a little bit? We’ve covered this before, but do you want to cover it again and just kind of recap?

Lexy: Sure. Self-selection bias is when you don’t get a fair representation of the population because only those people who have opted to participate are represented. And that’s exactly what we see here.
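(As a rough illustration of what self-selection can do to an estimate, here is a small, hypothetical simulation in Python. The numbers are made up for this sketch and have nothing to do with the actual Moral Machine data; they just show how a sample that opts itself in can drift away from the population it’s supposed to represent.)

```python
import random

# Hypothetical illustration of self-selection bias (not data from the study).
# Imagine a population where preference for "spare the pedestrians" varies with
# how tech-savvy someone is, and tech-savvy people are far more likely to
# opt in to an online survey.

random.seed(42)

population = []
for _ in range(100_000):
    tech_savvy = random.random() < 0.3          # assume 30% are tech-savvy
    # Assumed preference rates, chosen only to make the skew visible.
    spares_pedestrians = random.random() < (0.8 if tech_savvy else 0.5)
    population.append((tech_savvy, spares_pedestrians))

# True population preference.
true_rate = sum(pref for _, pref in population) / len(population)

# Self-selected sample: tech-savvy people opt in 60% of the time, others 5%.
sample = [pref for tech, pref in population
          if random.random() < (0.6 if tech else 0.05)]
sample_rate = sum(sample) / len(sample)

print(f"True population rate:   {true_rate:.2%}")
print(f"Self-selected estimate: {sample_rate:.2%}")  # noticeably higher
```

In this made-up example, the opted-in sample overstates the population’s preference by a wide margin – exactly the kind of skew the researchers acknowledge in the quote that follows.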

Lexy: There is actually a quote in this article that says “the researchers acknowledged that the results could be skewed given that the participants in the study were self selected and therefore more likely to be Internet-connected…” because it is an online study “…of high social standing and tech savvy.” What I thought was even more curious was the next line of this quote which said, “but those interested in riding self driving cars would be likely to have those characteristics also.” As though that makes it the right thing to measure for everyone.

Lexy: And the reason that I say that is that the first thing that jumped into my head was, “okay, great. They’re the ones that are choosing to ride in the car. But that doesn’t mean that their ethics get to be the ones imposed on everybody.” The pedestrians are also potentially being struck by driverless vehicles in these scenarios. How and why don’t they get a say?

Marie: True. That’s actually interesting that you read it that way, because when I read that part of the article, I was just taking it as they were acknowledging that these people would most likely be interested in self-driving cars, which is part of the reason why they self-selected to be part of the Moral Machine test. Going through the Moral Machine test, I also am curious how many people were like me and you and got to it and were like, “nope, don’t feel comfortable answering this,” and just self-selected out. Now, we’re interested, and even though we would probably have valuable input, we were just like, “nope, can’t do it.” Somebody else came through and said, “I’m comfortable picking these options. Thirteen questions. I answered as best I could.”

Lexy: Yeah. It’s… there’s self-selection both ways, definitely. But also, if we’re trying to understand the ethics of the various cultures of the people who potentially would own or ride in self-driving vehicles, it doesn’t mean that’s the be-all, end-all of the ethics of self-driving vehicles.

Marie: Absolutely.

Lexy: One of the other quotes from this article was that “technologists and policymakers should override the collective public opinion.” So, really then, none of these opinions matter. Some other group entirely gets to decide. And then what are their preferences? If it’s policymakers – goodness, we’re in an election cycle. I can only imagine what people in power would potentially want to select in terms of the safety of a vehicle and whom they would prioritize. Would they then say, “well, did this person vote for me?” That’s just a tremendous burden of bias.

Marie: Tremendous burden of bias. But the other aspect of this would be even the idea that different cultures would have different preferences. So take a car that is developed in Japan. If it was just built in Japan, sold in Japan, and used in Japan, then you could have a pretty straightforward, “okay, in Japan they have decided these are the ways that they’re going to build the AI to fit the moral expectations of that culture.” But once you start talking about how things are actually made and distributed, could a car that was built in Japan, with the moral design of a Japanese car, then be bought and used in the US, where we might have a different moral guideline for how self-driving cars should operate?

Lexy: Absolutely. The other thing that just came to mind was when we think about anticipating adversaries, these are just computers. These are really fancy computers.

Marie: Sure.

Lexy: What would happen if someone developed the anarchistic ethical package for driverless vehicles that said, “I don’t care what happens to anyone else; keep me safe,” no matter what regulations are in place? If you manage to install the anarchistic ethical package, what then happens anytime you have a brake failure? Is it going to just randomly strike a pedestrian? There are so many ways I could see problems with this, and you have to anticipate that someone somewhere will hack a vehicle. Many people do it regardless and try to take off the governors for speed caps and things like that. I can only imagine what would happen with all of these types of decisions built into a computer in a vehicle.

Marie: The topic that you bring up, the concept of anticipating adversaries, is really important in this case, because once these vehicles are out there, their potential for misuse is going to be much higher. And you want to make sure that you’re doing all that type of thinking and that groundwork beforehand.

Lexy: Yeah, absolutely. The other thing that you brought up was that there are preferences stated in this article around gender, or around whether or not a woman was pregnant in the car or in the pedestrian group, or what have you. And one of the things that we talked about was: how would the car know? What kinds of sensors does it have to be able to identify that?

Marie: Because even in the example that we got, our first one, we have a criminal in each group. How does the car know that the passenger is a criminal and what type of criminal?

Lexy: And if it’s a criminal, does the car just drive itself to a police station and lock the doors until a police officer comes in and takes the criminal away?

Marie: Now, honestly, I feel like for the purposes of the Moral Machine experiment, they are just putting different types of groups together to see what people’s opinions are about what the self-driving car should do. So I doubt that they’re envisioning a situation where the self-driving car really does know, “oh, I’ve got a criminal in the backseat,” or…

Lexy: Does the self driving car know that it’s a getaway vehicle?

Marie: “I didn’t sign up for this.”

Lexy: Exactly.

Marie: What about what the self driving car wants to do? Anyway different discussion for another time.

Marie: So there’s one thing to be said about maybe the information that it knows about the passengers that it has inside of it. But then to also be able to look at a crosswalk and see somebody in the crosswalk as a criminal – how is it determining that?

Lexy: Does it expect that anyone wearing a black mask is a criminal and obviously no other people would have that? Or maybe a stripey shirt?

Marie: Stripey shirts.

Lexy: Always going to be the stripey shirt.

Marie: And then people are just gonna stop wearing stripey shirts because they don’t want to be hit in a crosswalk.

Lexy: And then fashion designers are going to have to change their whole line. It’s chaos. Chaos, I say.

Lexy: Well, and the other thing we had in our scenario was a large man and a large woman. There’s enough question about what that means in society. Like, do you measure it by BMI? Do you measure it by waist size? What does large mean?

Marie: And I actually find it interesting that the Moral Machine used “large woman” or “large man” or “female athlete” or “athletic man” in their descriptions, because I feel like they were intentionally vague so that people’s own experiences and biases could color how they respond to this. So there might be somebody that reads “large woman” and pictures something very different in their head than somebody else. They’re using their own perception of what that means to inform how they answered the questions for the Moral Machine.

Lexy: Yeah. There was also one that had scenarios with a homeless man or something like that, where it was a very clear difference in social standing between, for example, the homeless person and the executive.

Marie: Right. Which we had in our example. And there are also options with cats and dogs.

Lexy: There were. They found three distinct clusters of countries in how they chose amongst the various aspects of the Moral Machine questions, and they have three different profiles. We’ll link the actual article that was published in Nature, which is what this article was based on – the actual study results from MIT. The profiles of each showed a very different perspective around whether they preferred to spare humans, or the old or the young, or females, and so forth – all of these different aspects. And it also had in there fit versus heavy and so forth.

Lexy: There was another part, which we didn’t go to but could be fun, where you can submit your own scenarios.

Marie: True

Lexy: And I’d be very interested to know which scenarios came from which countries or regions, to see what they felt was an ethical dilemma. We haven’t checked this out, but you should.

Marie: Yeah. Or share with us the scenarios that you come up with and we can maybe do a recap on ones that had been submitted by the community.

Lexy: Yeah, you can submit those at datascienceethics.com on responses to this post.

Marie: Perfection.

Marie: Thanks, everybody, for joining us for this quick take about the Moral Machine and hearing about how Lexy and I got stumped and just can’t answer any of these questions.

Lexy: Are you leaning towards one or the other on our scenario?

Marie: Let us know in the comments below. This is Marie Weber.

Lexy: And Lexy Kassan. Thanks so much.

We hope you’ve enjoyed listening to this episode of the Data Science Ethics Podcast. If you have, please like and subscribe via your favorite podcast app.

Join in the conversation at datascienceethics.com, or on Facebook and Twitter at @DSEthics where we’re discussing model behavior. See you next time.

This podcast is copyright Alexis Kassan. All rights reserved. Music for this podcast is by DJ Shahmoney. Find him on Soundcloud or YouTube as DJShahMoneyBeatz.
