Image Copyright: akz / 123RF Stock Photo


Episode 11: Deepfake – Show Notes

Deepfake is the process of using deep learning and computer vision to generate realistic fake audio or video in which a performance from one person appears to come from another. This technology, once the brainchild of academia, was taken into the corporate world to help make image and audio editing more efficient. But it also poses a strong risk of adversarial use.

Celebrities have been faked into adult entertainment videos. United States Presidents have been made to appear in public service announcements warning viewers not to trust everything they see online. And this is just the start.

As Deepfake algorithms are better known and more widely used, what might happen to our concept of video evidence? What ramifications are there for actions taken when even recordings of those actions could be pointed to as fake? What might we do to counter these possibilities?

Additional Links on Deepfake

The Era of Fake Video Begins Atlantic article describing how “deepfake” began with a programmer who then democratized some of their work so others could use it. Powerful piece on the history of real video, its impact on perception, and what that means in this new world of fake video.

We’re Underestimating the Mind-Warping Effect of Fake Video Vox article with links to Jordan Peele’s Obama Deepfake PSA.

The Terrifying Future of Fake News BuzzFeed article discussing Aviv Ovadya’s “infocalypse” – the rise of misinformation and propaganda via fake news.

Vice News Episode on HBO Now covering Deepfake (starting at 19:41).

Adobe is Using AI to Catch Photoshopped Images Engadget article highlighting some current efforts to identify altered content.

Episode Transcript

View Episode Transcript

Marie: Welcome to the Data Science Ethics Podcast. You have Marie and Lexy here and today we are going to be talking about a new technology called Deepfake. We’re going to tie this in with our previous episode where we were talking about anticipating adversaries.

The technology of Deepfake works by taking an actor’s performance and mapping it onto the likeness of somebody else. An example of this was Jordan Peele giving a performance that was then mapped onto an image of Barack Obama. Barack Obama didn’t say those things but, because of Jordan Peele’s performance, it appeared as though he did.

When we talk about anticipating adversaries, there are a lot of conversations that this opens up.

Lexy: There are quite a lot of aspects to this. The original technology was not built specifically for the industries in which it has been used. There have been a few specific, very high-profile cases, some involving adult entertainment, some involving politics, where very famous people’s likenesses have been used with the performance of someone else. The original technology was not really meant to do that specifically but, because of the data available, there were simply more images of famous people for these systems to train on and use as the person whose likeness would be shown.

The technology itself seems to have come from academia originally as a “hey, we can do this now. We can do all kinds of things where we don’t have to have the person physically perform these actions. We can just have somebody else do it.” Maybe think about it from the standpoint of making a movie where you only have the high-ticket actor for a day or two and you have a stunt person for a few weeks doing stunts. Now you can map the likeness of the actor onto the stunt person’s performance in a convincing way that doesn’t require the stunt person to put on a costume the way they have in the past. It gives more ability to create a realistic image of the actor, including having, potentially, their voice coming from the stunt person. The actor doesn’t actually have to be present.
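To make the mechanics a little more concrete: the early open-source face-swap tools were commonly built around an autoencoder with one shared encoder and a separate decoder per identity. Below is a minimal, illustrative PyTorch sketch of that idea – the image resolution, layer sizes, and variable names are assumptions made for the example, not the code of any particular deepfake tool.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    # Shared encoder: compresses a 64x64 RGB face crop into a latent vector.
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),    # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),   # 32 -> 16
            nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),  # 16 -> 8
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    # Identity-specific decoder: reconstructs a face from the shared latent space.
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # 8 -> 16
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),   # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),    # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

# One shared encoder, one decoder per identity.
encoder = Encoder()
decoder_actor = Decoder()      # would be trained to reconstruct the target actor's face
decoder_performer = Decoder()  # would be trained to reconstruct the source performer's face

# The swap: encode a frame of the performer, then decode it with the actor's decoder,
# producing the actor's face with the performer's pose and expression.
performer_frame = torch.rand(1, 3, 64, 64)  # placeholder for a real face crop
swapped = decoder_actor(encoder(performer_frame))
```

During training, each decoder only ever reconstructs its own identity from the shared latent space; the face swap comes from deliberately pairing the encoder with the “wrong” decoder at inference time.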

That said, there are a lot of cases now where this technology has already been used in ways it was not meant to be. One of the ways this has already surfaced, as you mentioned, is that they’ve used President Barack Obama or President Trump as the likeness. That presents a number of ethical issues, potentially. First, you have a very high-powered person who appears to be saying or doing something that they wouldn’t otherwise. On the other hand, the fact that this ability exists means that if they did do or say something, they could potentially turn around and say “nope, that was a fake. We know that this technology is out there that can create these realistic fakes. That wasn’t actually me.”

There’s accountability on both sides of that in terms of whether the President or the famous person involved takes responsibility for what they actually did or said or potentially is framed for saying or doing something that they didn’t do. There are a lot of different ramifications for that. In the case of politics, the stakes are very high. The fact that we could potentially have a situation in which a truly false news article with video, with audio comes out of a President saying something that they didn’t say, and that circulates and is believed by other political actors, could create a massive problem.

Marie: Yeah. Some of the sources that we’re going to link to for this episode have already had people stating similar things. It doesn’t even have to be that realistic. It just has to be convincing enough that somebody else would take action on it.

I think that’s the piece that is the most interesting when it comes to the ethics of this. There are going to be some people that say “well, we’ve been able to figure out and detect which videos have been produced by Deepfake.” Right now, for example, those images or clips don’t have blinking in them. So somebody has been able to figure out, “okay, if I run this algorithm, I can detect if it was produced by Deepfake or not because there was no blinking.” But as soon as somebody can add blinking into the algorithm, then somebody else has to develop a different way of detecting whether it was truly an authentic piece of video or audio or whether it was manipulated or produced using Deepfake or some of the other technologies that might be able to replicate somebody’s voice patterns.
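As a rough illustration of the kind of blink-based check Marie describes: a common building block in the published work on blink detection is the eye aspect ratio (EAR), computed from facial landmarks around each eye. The sketch below assumes the landmarks have already been extracted by some face-landmark library; the threshold and frame counts are illustrative assumptions, not values from any specific detector.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmark points around one eye, ordered
    outer corner, two upper-lid points, inner corner, two lower-lid points."""
    eye = np.asarray(eye, dtype=float)
    vertical_1 = np.linalg.norm(eye[1] - eye[5])
    vertical_2 = np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return (vertical_1 + vertical_2) / (2.0 * horizontal)

def count_blinks(ear_per_frame, closed_threshold=0.2, min_closed_frames=2):
    """Count blinks in a sequence of per-frame EAR values.
    A blink is a run of at least `min_closed_frames` frames below the threshold."""
    blinks, closed_run = 0, 0
    for ear in ear_per_frame:
        if ear < closed_threshold:
            closed_run += 1
        else:
            if closed_run >= min_closed_frames:
                blinks += 1
            closed_run = 0
    if closed_run >= min_closed_frames:
        blinks += 1
    return blinks
```

A clip of a real speaker will normally show several blinks per minute, so an implausibly low count over a long clip is one – admittedly fragile – red flag, which is exactly why this particular tell disappears as soon as the generators learn to blink.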

So you’re going to have kind of an arms race, similar to what we’ve seen in cyber security, where somebody comes up with a virus and somebody else comes up with a way to protect against that virus or that cyber security threat. We’re going to have something similar when it comes to fake images, fake video, and fake audio. Somebody is able to detect whether it was real or not, and then other people are going to figure out what that detection method is and evolve again.

There are multiple reasons – market factors – that were probably driving academia to research this. Some of them might be in the video game industry, to make it easier to create more realistic, immersive experiences. Or it could be in film, where somebody passes away but people want to continue to use their likeness in movies. Or to use an older or a younger version of an actor and still be able to map their performance onto those different ages of a person. There are some really great things that this technology might allow people to do. But I think this is a really good example of where people didn’t anticipate the adversaries and the impact that this can have in other arenas and other applications.

Lexy: This is a great example of “just because we can, doesn’t mean we should,” which is really what ethics is all about. It’s figuring out what you should do in the situation that you’re in. Not necessarily what all of the options are, but what the best option is given the circumstances and the consequences.

There are a couple of things that you talked about. One of which was mapping performances for people who have passed or using someone’s likeness. We have regulations and laws protecting the use of likeness. What does that mean in an era where your likeness can simply be mapped onto anything?

It’s no longer the case that the paparazzi have to follow a celebrity around to get the shot that’s going to make them look bad in a tabloid. Now they can just mock it up. They’ve been doing this with Photoshop for ages. What’s interesting with Photoshop is that there is, in fact, a digital watermark. So you talked about being able to detect when something has been faked. A number of companies, including Adobe, have put in these digital watermarks to try to indicate that something has been altered – that the technology has been used to change whatever’s being seen. That said, if altered content can be created once – and this watermark has to be added as a piece of technology – it can be created again without that watermark.
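To illustrate Lexy’s point rather than Adobe’s actual mechanism (which isn’t detailed here), the toy sketch below embeds a tiny “this was edited” marker in the least significant bits of an image. It shows why this kind of watermark only works if the editing tool cooperates: software that simply never embeds the marker leaves nothing to detect. Everything here – the marker, the function names – is a hypothetical example.

```python
import numpy as np

MARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # hypothetical 8-bit "edited" marker

def embed_marker(image):
    """Write the marker into the least significant bits of the first few values.
    `image` is an HxWx3 uint8 array; returns a marked copy."""
    marked = image.copy()
    flat = marked.reshape(-1)
    flat[:MARK.size] = (flat[:MARK.size] & 0xFE) | MARK  # overwrite only the lowest bit
    return marked

def has_marker(image):
    """Check whether the marker is present in the least significant bits."""
    flat = image.reshape(-1)
    return np.array_equal(flat[:MARK.size] & 1, MARK)

image = np.zeros((64, 64, 3), dtype=np.uint8)
edited = embed_marker(image)
print(has_marker(edited))  # True: the cooperating tool announced the edit
print(has_marker(image))   # False: a tool that never embeds the marker leaves nothing to find
```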

In the case of, for example, the blinking – maybe you start having technologies where it’s not trying to mimic the entire face. It just mimics the mouth moving, which we’ve been able to do for quite a long time. In that case you’re in the arms race that you’re talking about. Somebody is going to try to make it difficult for people to get away with making fakes. Somebody else is going to make it easier because there’s a market for it. They have the technology. They know the technology is out there. Now you’ve got somebody who’s trying to make a better detection method to find the people who didn’t bother to watermark their images or their video. And now you’ve got these actors going back and forth over “how do we game the system?”

We’ve talked a little bit about gaming the system in other episodes like the Retail Equation episode. The more information you get about what it takes to pass through the system and game the system, the more you game and then the more the system has to change. There’s the constant reworking of an algorithm. It comes back to that data science process of having to rework your algorithm – having to follow it over time. In this case, you have to be really on top of it because the stakes do get high very fast when you’re talking about the world stage on which you are literally placing players.

Marie: I think the other piece of this, and why it becomes so high-stakes so quickly, is that once people see something, especially if it’s their first time seeing it, that’s kind of what everything else gets compared to. This goes back to something that we’ve covered in a previous episode in terms of anchoring bias. That’s the whole idea that what somebody sees first is basically what they’re going to compare everything else back to.

Think about what we’ve seen happen in terms of social media and people sharing different news stories. Then that’s where they anchor all their other opinions back to. Even if those news stories haven’t been accurate and it’s been fake news, that’s still provided an anchoring bias for a lot of people to build other world views around. That was just with websites that were playing up information. How much stronger does that get when somebody has heard somebody say something? When somebody has seen somebody say something or seen somebody do something? And then you try to come back and say “No, that wasn’t actually that person doing that thing. That was a video that was created using this technology called Deepfake.” That’s going to be the type of landscape that we’re moving into.

Lexy: The other part of that is that we then have to rely on the ethics of the media. There’s a lot of debate in today’s society about what set of ethics dictates what is shown in the media. What worldview is shown? Having this type of technology, especially when its output can spread so widely so quickly, means that the onus is really placed on media to validate, from a technical standpoint, that something is real. Throughout the history of journalism, they’ve already been trying to identify good sources that are credible, trying to validate that something happened the way that it did, and seeking the truth, and all of that.

This takes journalism and the ethics of journalism to a whole new level. They now have to employ rafts of machine learning specialists and AI technologists and so forth to try to reverse engineer video files to see if they’ve been altered or created using Deepfake. Now they’re the ones in control of much of that, depending on how prevalent this technology becomes and how available it is. You could have websites that are not held to journalistic integrity standards that could easily create anything they want – make it seem like the person of their choice has said or done whatever it is they feel is important and post that out to the world.

Marie: Part of internet culture today embraces that. There’s this idea of remixing – taking memes, combining things. Even electronic music – a lot of it is taking things, sampling them, creating something new. There is part of our culture that is already doing that and could take this technology to create new memes, or to make it sound like somebody else was singing the lyrics, or to make it sound like somebody else was saying something, and remix that into something new. There are almost parts of our culture that have been practicing this skill. This is just going to be a new technology to apply and use.

It goes back to that question – even if we could, should we?

When I think about marketing, I also think about how this could be an issue for people that want to have a celebrity endorsement. That’s a huge issue. Once a fake is out there, people could still have the perception that a product has a celebrity endorsement. Now that celebrity would have to go through the process of saying “no, I didn’t endorse it. That wasn’t an actual endorsement. That’s not what I actually said.” Again, it goes back to that anchoring bias. If people have been exposed to the fake endorsement first, all the work that goes into disproving it has to be ten times as hard.

Lexy: There have been so many precedents set online of people trying to get false information removed. Part of GDPR in Europe is the ability to get false information about you taken down from the internet. If you can prove that something is false, you can ask whatever website it is to remove that content and you can actually get things delisted from search engines.

We don’t have that in the States. So let’s say there’s a celebrity endorsement that’s false. There’s a cease and desist. But it’s now been cached on sites all over the internet. It can’t easily be removed because there’s a copy of it here, there and everywhere. Even if the one site they found it on originally takes it down, it doesn’t mean that it disappears forever. It’s just that one site that it’s gone from. And so when you think about using this type of content online, it can become completely pervasive through the internet in very little time and there’s not an awful lot that can be done to remove it. At some point, someone’s going to see it again. At some point, somebody’s gonna believe it again.

Marie: Especially because there are times when companies that are active in that space… Even if one company basically gets shut down, they kind of re-brand, re-package, re-launch over here and end up doing similar things.

One area that gives me hope is the push-back that people have had against over-Photoshopped images. You’ve seen campaigns where people have asked certain magazines to not use as many Photoshopped images. That could be, at least, one template that people could use moving forward. Say “hey, we want to make sure that – whatever media or whatever news site – we want to make sure that you’re doing live interviews. Or we want to be able to see more live events. We want to be able to see Facebook streaming live and validate that it’s the right person.” Maybe those are opportunities where people could do something that is more real. The challenge is that this type of technology is so real-time that even saying “this is Facebook Live, it’s the real person,” you might still not be able to validate it.

It’s a very complicated ethical type of technology just because of what it has enabled and what it will enable. I hope that people are able to come up with solutions to make sure we can still trust the sources and the media that we’re consuming.

Lexy: The only way I could see it working would really be in live events. You would literally have witnesses to what occurred who would then be able to verify that this is what happened. This is what was said, whatever it might be. However, the moment you see something on a screen, which is how we consume the vast majority of our media today, you can’t necessarily believe it anymore. Even if it was recorded at a live event with thousands and thousands of people, the moment it’s on a screen you don’t know if it’s been edited.

Marie: Unless we were able to use something like blockchain to then verify that the video from the live event was actually not edited. Then once it’s been added to the blockchain it can be verified multiple times. But you would need to have some way to make sure that only things that were from live events and validated from multiple people could then be entered into the blockchain as an authentic, verified piece of content.

Lexy: Blockchain is a technology that allows for a digital signature through the use of cryptography, where multiple computers simultaneously hold copies of the same cryptographically linked record. That shared record is essentially a distributed ledger. It’s the official record of the lineage of this piece of content. This is what’s behind cryptocurrency – that is the technology underneath it. We’ll have some more episodes on it in the future. But for those who are wondering, that’s what blockchain is all about.

The idea behind blockchain is that it can’t be corrupted because it’s distributed. Because it’s on so many systems, you can’t simultaneously hack or change all of them if you were to try to attack it. So it’s supposed to be safer to have a distributed ledger versus a single, central ledger as you would have, for example, at a bank. At a bank, you have an account. The bank knows exactly how much is in your account and what transactions have occurred. In this case, every computer connected to the network that has the blockchain technology has a copy of that ledger and therefore has the knowledge of what has happened in that account, what its current value is, and so forth.

Marie: Blockchain can be used to validate a specific piece of art or content. There are still things to work out in terms of how you take content that’s developed out in the real world and then import it into a system that can use blockchain to validate it. But there are potential pieces that could be linked together.
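As a very rough sketch of that idea – and only a sketch, under the assumption that trusted witnesses register content at capture time – the toy code below hashes a video file and appends that hash to a tamper-evident chain of entries, so later viewers can check whether the exact bytes they are watching were ever registered. The class and function names are illustrative; a real system would also need distributed consensus, witness signatures, and a way to bind the hash to the physical recording.

```python
import hashlib
import json
import time

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class ContentLedger:
    """Toy append-only hash chain. Each entry commits to a content hash and to the
    previous entry, so altering any registered video (or any past entry) breaks
    every later link."""
    def __init__(self):
        self.entries = [{"index": 0, "content_hash": None,
                         "prev_hash": "0" * 64, "timestamp": 0.0}]
        self.entries[0]["entry_hash"] = self._hash_entry(self.entries[0])

    def _hash_entry(self, entry):
        payload = {k: v for k, v in entry.items() if k != "entry_hash"}
        return sha256_hex(json.dumps(payload, sort_keys=True).encode())

    def register(self, video_bytes: bytes):
        """Record the hash of a newly captured video as a new chain entry."""
        entry = {"index": len(self.entries),
                 "content_hash": sha256_hex(video_bytes),
                 "prev_hash": self.entries[-1]["entry_hash"],
                 "timestamp": time.time()}
        entry["entry_hash"] = self._hash_entry(entry)
        self.entries.append(entry)
        return entry

    def verify(self, video_bytes: bytes) -> bool:
        """True only if the chain is intact and this exact content was registered."""
        for prev, cur in zip(self.entries, self.entries[1:]):
            if cur["prev_hash"] != prev["entry_hash"] or cur["entry_hash"] != self._hash_entry(cur):
                return False
        return any(e["content_hash"] == sha256_hex(video_bytes) for e in self.entries[1:])

ledger = ContentLedger()
original = b"raw bytes of a video captured at a live event"
ledger.register(original)
print(ledger.verify(original))               # True
print(ledger.verify(original + b" edited"))  # False: any edit changes the hash
```

Note that this only proves a given file matches what was registered; it says nothing about content that was never registered in the first place, which is where the harder trust problems Marie and Lexy describe come back in.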

Lexy: There is an awful lot here.

Marie: This is a big topic.

Lexy: This is a huge topic. And while our quick take has gone into a lot of areas of this, there is still a ton to be covered. This is just the start of a very large ethical issue surrounding machine learning – in this case deep learning – and the generation of content. I guarantee you this will not be our only episode where we talk about content generated by AI. Much more to come. And I think that we’ll probably start seeing more stories in the news on these types of issues: celebrities, politicians, and others saying, one way or the other, either that they were framed by a Deepfake or that something people think they actually did was a Deepfake.

Marie: The implication is that a video could be made, for example, that only has to be good enough for somebody else to believe it and then take action, especially on a global scale. I think that’s where so many people are seeing the concerns around national security and global security. It is going to be something that definitely comes up again because there isn’t an answer for it right now. The technology is only continuing to advance – unless, for whatever reason, people realize the ethical implications of this and are able to put a moratorium on it. The technology’s already out there, so that’s highly unlikely to happen.

This is, again, going back to our previous episode about anticipating adversaries. This is potentially an example that we’ll look back on and have a case study, as this moves forward, of a technology that was developed where people really didn’t anticipate the adversaries. And we’re going to see how that plays out in the future.

Lexy: Good luck to us all.

Marie: Good luck to us all.

Lexy: Thank you so much for joining us on the Data Science Ethics Podcast. This has been Lexy and Marie. We’ll catch you next time.

Marie: Thanks.

Lexy: I hope you’ve enjoyed listening to this episode of the Data Science Ethics Podcast. If you have, please like and subscribe via your favorite podcast app. Join in the conversation at datascienceethics.com, or on Facebook and Twitter at @DSEthics where we’re discussing model behavior. See you next time.

This podcast is copyright Alexis Kassan. All rights reserved. Music for this podcast is by DJ Shahmoney. Find him on Soundcloud or YouTube as DJShahMoneyBeatz.
