AI Has Americans Worried


Episode 23: Concerns About AI – Show Notes

Horror stories of AI gone mad are everywhere in science fiction – but are they likely to become reality? Many Americans now believe so. Based on a recent Vox article covering a study from the University of Oxford, we discuss the top concerns about AI on the minds of Americans, how this varies between groups, and where we should look for more answers.

Additional Links on Concerns About AI

The American Public is Already Worried about AI Catastrophe – Vox article describing the survey of current fears surrounding AI and its implications.

Artificial Intelligence: American Attitudes and Trends – Original study from the Center for the Governance of AI, Future of Humanity Institute at the University of Oxford. Check out Section 3: Public Opinion on AI Governance.

Concerns About AI Episode Transcript


Welcome to the Data Science Ethics Podcast. My name is Lexy and I’m your host. This podcast is free and independent thanks to member contributions. You can help by signing up to support us at datascienceethics.com. For just $5 per month, you’ll get access to the members-only podcast, Data Science Ethics in Pop Culture. At the $10 per month level, you will also be able to attend live chats and debates with Marie and me. Plus you’ll be helping us to deliver more and better content. Now on with the show.

Marie: Hello everybody and welcome to the Data Science Ethics Podcast. This is Marie Weber and Lexy Kassan, and today we’re going to talk about an article that was on Vox.com, “The American Public Is Already Worried About AI Catastrophe.” This is an article written by Kelsey Piper, and there was just a lot in here. So I wanted to do a quick take on this article and some of the things that it brings up.

Lexy: There’s a ton, it’s a very meaty article.

Marie: So we’ll have a link in the show notes, but we’ll jump right into the meat of it. The interesting thing about this was that they were surveying the public, and they found that people were already worried about different types of AI – not just AI in general, but some of the shorter-term impacts as well as longer-term impacts.

Lexy: Yeah. There were a number of different categories. The way they surveyed was that there were 13 different topic areas. Each respondent was shown five of those topics and asked to rate each one on a scale of zero to three, from not at all concerning to highly concerning. From that, they found a number of different patterns, and certain topics rose to the top. The foremost one, from the sounds of it, was concern about data privacy: AI would have access to a ton of information, and what happens when big data gets even bigger and big data becomes Big Brother, and so forth.
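[Editor’s note for the show notes: the scoring scheme Lexy describes – each respondent sees five of thirteen topics and rates each from 0 to 3 – can be sketched in a few lines of Python. This is a hypothetical illustration of that kind of aggregation, not the study’s actual code; topic names beyond the seven mentioned in the episode, and all ratings, are simulated placeholders.]

```python
import random
from collections import defaultdict

# Seven topics mentioned in the episode; the rest are placeholder names
# standing in for the study's full list of 13 topic areas.
TOPICS = [
    "data privacy", "digital manipulation", "cyber attacks", "surveillance",
    "technological unemployment", "value alignment",
    "critical AI safety failures",
] + [f"other topic {i}" for i in range(1, 7)]

def mean_concern(responses):
    """Average the 0-3 ratings each topic received.

    `responses` holds, per respondent, a dict mapping the five topics
    they were shown to their rating (0 = not at all concerning,
    3 = very concerning). Topics a respondent never saw simply don't
    contribute to that topic's mean.
    """
    totals, counts = defaultdict(float), defaultdict(int)
    for seen in responses:
        for topic, rating in seen.items():
            totals[topic] += rating
            counts[topic] += 1
    return {t: totals[t] / counts[t] for t in totals}

# Simulated survey: 1,000 respondents each rate 5 randomly chosen topics.
random.seed(0)
responses = [
    {t: random.randint(0, 3) for t in random.sample(TOPICS, 5)}
    for _ in range(1000)
]
means = mean_concern(responses)
ranked = sorted(means, key=means.get, reverse=True)  # most- to least-concerning
```

Because each respondent rates only a subset, the per-topic mean has to be taken over the respondents who actually saw that topic, which is why the sketch tracks totals and counts separately.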

Marie: And they didn’t necessarily go into all the specifics of what each of these means. Again, these are very broad topics, but we’ve talked about several of them in our other episodes. Another area of concern was digital manipulation, which relates to fake images and even fake video, which we talked about in our previous episode on deepfakes. Exactly. And then AI-enhanced cyber attacks and surveillance were two other areas that people were concerned about in terms of shorter-term risks.

Lexy: So that might be things like being able to anticipate adversaries. When security companies look at cyber attacks, they have to understand that the threat is no longer bounded by the limits of what people can do, even large numbers of people, but by what machines can do with AI – with distributed attacks and so forth.

Marie: Absolutely. And then the other area that’s very interesting is what people are worried about in terms of tomorrow’s concerns – not necessarily the short term, but a bit longer term. That includes things like technological unemployment, which we have some podcasts coming up that will go into a little bit more, and then failures of value alignment, which is an interesting topic.

Lexy: That one to me sounds a lot like our Moral Machine episode, where we talked through the Moral Machine experiment that was going on, as to what an autonomous car should choose to do if it has to make a tough choice.

Marie: Absolutely.

Lexy: The one about technological unemployment is, I think, one of the top concerns. It certainly is one we hear about a lot: that there is going to be displacement because robots are going to take the jobs. This is something that happened during other industrial revolutions, and people talk about this rise of AI and machine learning as the fourth industrial revolution. So we will definitely have a lot more coming on future podcast episodes around the fourth industrial revolution.

Marie: Absolutely. And then there are also people worried about critical AI safety failures that kill at least 10 percent of people on Earth. I found that particular specificity very interesting: it wasn’t just that there will be critical AI safety failures that end up harming people, which kind of goes back to the Moral Machine episode we did, but that it’s at least 10 percent of people on Earth.

Lexy: Well, I think they had to provide some sort of guidance as to what it means for it to be a catastrophe, because one person’s catastrophe is another person’s non-issue. So it has to be a pervasive enough problem that most people would quantify it as such.

Marie: Yeah. And it’s interesting, because I don’t think many people would quantify AI risk like this, but they were basically putting it on the scale of a pandemic.

Lexy: Absolutely. Something along the lines of the Black Death, which wiped out a third of Europe’s population, most likely about 10 percent of the world’s population at the time. So, really, an interesting parallel. Yup.

Marie: So the interesting thing this article covers is, one, that these concerns are already out there, and it delved a little bit into who has these concerns.

Lexy: Yeah. This was another instance where different segments of the population seemed to be worried about different things. Again, kind of like our Moral Machine episode, where we saw different segments of the population leaning toward different answers as to which choice would be better for the machine to make. Yep. In this case, the hypothesis was that it would be the more tech-savvy, more AI-aware geeks in Silicon Valley who had the most concern, and certainly there have been a number of very prominent technology leaders who have come out with concerns about AI. But what actually seemed to be the case was that those people were more comfortable with AI: more comfortable with the technology as it exists today and the promise it holds for the future. Whereas people who are less tech-savvy and not really in the industry were even more concerned about what AI could mean for them and for the world.

Marie: Yeah. And now this survey isn’t necessarily that large, so they didn’t break it down so much by different demographics and what they were concerned about.

Lexy: Can I just say that I’m really excited they didn’t do that, because spurious results are bad. As a statistics geek, it made me really happy that the article mentioned not to do this if you didn’t specifically design the survey for it.

Marie: I agree. So thrilled. Very good. But they were able to draw some larger takeaways, including that women seem to be more concerned than men, that people in the field seem to be less concerned than people outside of it, and that people with higher incomes seemed to be less concerned than people with lower incomes. The other thing it made me think about is one of the principles we’re going to talk about in a future episode, which is: incorporate inclusivity.

Lexy: Right. Incorporate inclusivity, as a brief synopsis of what we’re going to touch on much more deeply later, is the idea that, as part of the data science process, it’s important to bring in other perspectives so that you’re not pigeonholed by your own biases, by your own perspective. Other types of thought processes, other perspectives, can be at least consulted, if not utilized, within the process to ensure that, whatever you’re doing with the data and the algorithms you’re developing, you’re being conscientious of how it would impact more people than those like you.

Marie: And from a user experience perspective, including these types of perspectives early on will help you develop a better model, a better process, and a better product, because you won’t end up developing something in a silo that, when you bring it to the public, gets reactions like: why did you do that? I don’t think that was a good idea. That actually doesn’t have the outcome I would expect. Including those people early in the process can help you uncover things faster, so you can build a better model and bring better things to market.

Lexy: It makes me think even of the episode we did on the Google gorillas incident.

Marie: Exactly.

Lexy: It comes down to getting sufficient variation in your data. Similarly, you would want sufficient variation in the perspectives on how to use that data.

Marie: Another big takeaway is that AI will affect everyone. One of the quotes from this article is that this poll suggests that almost everyone has some reservations about it. So incorporating people in this process to address those reservations is going to be really key.

Lexy: I think it speaks to the fact that there needs to be further dialogue between those who are developing AI and those who are being impacted by it, or are worried about being impacted by it. Because if the only quotes you hear are from technology leaders saying that general AI is going to be a real problem, that it will have tremendous economic ramifications, that bad things will happen – the sky is falling – then everyone’s going to believe it. If there are no dissenting voices, and every voice is mirroring that, where’s the debate? Where’s the conversation? The folks in technology who are more comfortable with AI – with where it is, with where it’s going – where are those voices? Those voices should be heard and allowed to have that conversation. I also found it interesting that, per the article, there are differences of opinion as to whether general AI will be achieved within a specific timeframe.

Lexy: Many of the respondents thought that it would be available by, I think it was 2028.

Marie: Yep.

Lexy: Whereas some experts had been predicting something like 2075 or 2061. That’s far enough out that, yes, we do have to prepare for it, but we would have time to prepare. Versus: this is going to happen in the next decade, and we don’t have a lot of opportunity to prepare economically for the changes that are coming. If it truly is closer to 2075, is there as much concern? Is the general populace even hearing that message? I don’t think that’s the case.

Marie: Well, and I think there’s a distinction here between general AI, AI that surpasses median human capabilities, and AI that surpasses most human capabilities. That’s where a lot of the concerns are, even today. Even if we never get to general AI, there are enough machine learning algorithms and enough automation that the concerns this article brings up are still relevant today, and people should be talking about them.

Lexy: Yeah, even an AI surpassing 50 percent of the population in terms of what it could do doesn’t mean it will do the jobs of 50 percent of the population. In a future episode, we will review a book called What To Do When Machines Do Everything, which I think has another really good take on this type of concern.

Marie: Absolutely. In addition to there being discussions about what might be automated, there’s still a lot of questions about what the future of work will look like and what people decide to do and how people incorporate this. There’s a lot of different factors that go into that. So as we do more quick takes, we’ll explore this topic further.

Lexy: Definitely.

Marie: The article at Vox closes with: “public interest in the topic nonetheless suggests that AI safety may be starting to go mainstream.” Which bodes well for people listening to this podcast, staying up to date and ahead of the curve on an emerging topic.

Lexy: I look forward to having more listeners. And thank you to all of you for joining us on the Data Science Ethics podcast.

Marie: Thanks so much.

Lexy: Catch you next time.

We hope you’ve enjoyed listening to this episode of the Data Science Ethics Podcast. If you have, please like and subscribe via your favorite podcast app. Also, please consider supporting us for just $5 per month. You can help us deliver more and better content.

Join in the conversation at datascienceethics.com, or on Facebook and Twitter at @DSEthics where we’re discussing model behavior. See you next time.

This podcast is copyright Alexis Kassan. All rights reserved. Music for this podcast is by DJ Shahmoney. Find him on Soundcloud or YouTube as DJShahMoneyBeatz.