Episode 10: Anticipate Adversaries – Show Notes
Adversaries to an algorithm or system can come in many guises and at many points in the data science process. Their intents range from well-meaning to nefarious. In this episode, we talk about different types of adversaries and how to anticipate them.
Well-Intentioned Adversaries
Business users who have asked for a data science project, or who are in charge of making process changes based on its results, have a specific set of goals in mind based on what the data tells them. This can put blinders on them when thinking through how the system could be abused. These folks generally do not mean to be adversarial but fail to fully consider the context of the changes they implement.
Solution: Help them see the ways the system could be biased, how it might be abused, and the situations in which it could have negative impacts on target populations.
Min-Maxing Adversaries
These people game the system. They figure out what it’s doing and exploit it to their advantage. While most are not breaking laws, they may be violating terms of service.
Solution: Set up systems to catch min-maxers and warn them to curb unwanted behaviors, ban repeat offenders from the system, or change the rules so that min-maxing is either impossible or severely limited.
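As a concrete sketch of what catching min-maxers might look like, here is a minimal Python example. It is illustrative only: it assumes a hypothetical log of per-user activity counts and flags users whose rate of a gameable action (say, cleaning-fee claims) is a robust statistical outlier relative to their peers.

```python
# Minimal sketch: flag users whose rate of a gameable action is an extreme
# outlier relative to peers, using a robust (median/MAD) modified z-score.
# `usage` is a hypothetical log of user_id -> (total_events, gamed_events);
# a real system would use richer features and route flags to human review.
from statistics import median

def flag_min_maxers(usage, threshold=3.5):
    """Return user_ids whose action rate has a modified z-score > threshold."""
    rates = {uid: gamed / total
             for uid, (total, gamed) in usage.items() if total > 0}
    values = list(rates.values())
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:
        return []  # no spread among peers to measure against
    return [uid for uid, r in rates.items()
            if 0.6745 * (r - med) / mad > threshold]

# Example: driver D3 claims a cleaning fee on 40% of rides vs. 1-2% for peers.
usage = {"D1": (500, 5), "D2": (450, 8), "D3": (100, 40),
         "D4": (300, 4), "D5": (600, 9)}
print(flag_min_maxers(usage))  # -> ['D3']: warn and review, don't auto-ban
```

The flag is only the first step; per the notes above, the response should escalate from warnings to bans, or better, to rule changes that remove the incentive entirely.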
Nefarious Actors
People with specific intention to bring harm, break laws, or violate rights. This includes hackers, terrorists, and other malicious groups who infiltrate a system for illicit purposes.
Solution: Take all possible precautions to protect privacy and secure the algorithm. Consider whether someone could use the information your system makes available, as intended, to cause significant damage. If so, pull the system or algorithm from use.
Additional Links on Adversaries in Data Science
Dealing with Min-Maxing – An article from RollFantasy that describes the problem of min-maxing and offers ways to deal with it in role-playing games. Some of these strategies could apply in other systems where this type of behavior appears.
Some Uber Drivers Are Reportedly Scamming Passengers with Fake Vomit – Vice article with more information on Uber drivers gaming the system meant to reimburse drivers for cleaning costs and missed hours.
People Are Calling for Figure Skating Rules to Change After Olympics Drama – Good Housekeeping article detailing some of the controversy over changes to the figure skating scoring rules.
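As a toy illustration of the dynamic in that last story (all numbers hypothetical), here is a short Python sketch showing how re-weighting a composite score changes which strategy a score-maximizing competitor picks:

```python
# Toy model (hypothetical numbers): each program has a technical and an
# artistic component; a score-maximizer picks whichever wins under the rules.
programs = {
    "quad-heavy": {"technical": 95, "artistic": 60},
    "balanced":   {"technical": 80, "artistic": 85},
    "artistic":   {"technical": 65, "artistic": 95},
}

def best_program(w_technical, w_artistic):
    """Return the program a rational competitor chooses under given weights."""
    return max(programs, key=lambda p: w_technical * programs[p]["technical"]
                                       + w_artistic * programs[p]["artistic"])

print(best_program(0.7, 0.3))  # rules reward difficulty -> 'quad-heavy'
print(best_program(0.3, 0.7))  # rules re-weight artistry -> 'artistic'
```

Every rule tweak moves the optimum, so anticipating min-maxers means re-running this kind of “who benefits” check whenever the scoring changes.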
Episode Transcript
Marie: Welcome to the Data Science Ethics Podcast. This is Marie and I am joined by Lexy. We are going to be talking today about anticipating adversaries.
In our last informational episode, we talked about context. Another way to think about context is how other people might use an algorithm once you’ve developed it. So Lexy, can you talk a little bit about anticipating adversaries and how that plays into the data science process?
Lexy: Adversary doesn’t necessarily mean a hacker. It doesn’t necessarily mean someone who is going to violate a law or a privacy policy or something like that. It could simply be someone who uses a system in a way that it was not intended to be used. It could be someone who sees the system and uses it to their advantage.
I often think about adversaries by incorporating adversarial thinking: how do I break this system? How would I get around it? I often find it helpful to take these questions to friends of mine who think – I put it in gaming terms because I’m a big nerd – like a rogue. Think like a thief, or someone sneaky, who would have a different perspective on what could be done.
Marie: For anticipating adversaries, you bring up a couple of good points, because different adversaries can have different contexts. You can have adversaries that are internal or external, adversaries that are end users of the algorithm, or adversaries that are in charge of the care and feeding of the algorithm. That’s a really good way to think about it: it’s not just somebody outside who might be an adversary; it could be somebody internal. And for the other algorithmic systems that surround us, you can think about how people have tried to game them and use that insight to design your own system better.
Gaming comes into play here because, I think, when somebody looks at a game and how to perform best in it, they say, “OK, how can I put in as little effort as possible and get as much gain as possible?”
Lexy: That’s called min-maxing.
Marie: That is called min-maxing.
You’ve seen people on Uber, for example, try to figure out how to be most efficient. Min-maxing isn’t necessarily a bad thing; here it’s about making the most money on the system with the fewest hours. So people will try to drive more when there’s surge pricing or higher demand. People will split up their hours: instead of a normal 9-to-5 shift, maybe they’ll drive for a few hours here and a few hours there because they know when the demand is. They’re trying to match their driving times to when they can make the most money on the platform. That’s not necessarily an adversary, but it is somebody looking to make the best use of their time on the system.
Lexy: On the flip side, there have been recent articles stating that Uber drivers have reported that passengers threw up in their vehicles, complete with photographic evidence, even when no such thing happened, so that they could collect a cleaning fee.
There are nefarious actors who, in order to maximize their profit, use the system in a way it wasn’t designed for. The original intent was to ensure that drivers would be compensated for having to clean their vehicles and for the driving hours lost while cleaning. But because of these kinds of bad apples, you see riders who are upset because they were charged, sometimes hundreds of dollars, for a cleaning fee that was never their responsibility.
Marie: So that’s a good example.
I think there are examples in the real world where you make one change, believing it will make the system better, and it has other impacts. When you anticipate adversaries, you really need to think about the different ways other people might react to each change you make.
This is going to sound like an odd example to bring into data science ethics, but it makes me think of the last Winter Olympics. People were talking about some of the rule changes made for figure skating and how they changed the sport. Skaters were approaching it in a much more mathematical and technical way rather than a more artistic way. There was a lot of back-and-forth about whether that was a positive or a negative for the sport. After the Winter Olympics, they announced they were going to change the scoring again to put more emphasis back on the artistic part of the sport rather than just the physical performance.
Those are the types of things where, when you change one thing, somebody can look at it and say, “OK, this is how I can maximize the system, get the most points, and reach my goal by doing x, y, and z.” You have to be able to anticipate that any time you change your system or tweak your algorithm.
Marie: A good way for people in data science to think about adversaries is simply to think about how people would game a system. Absolutely, there can be adversaries outside of your system that you also want to consider. But the people who use it on a day-to-day basis can also be the adversaries you need to think about as you’re implementing your design.
Lexy: As awful as it sounds, a certain amount of that has to be considered even when approaching a business problem. For example, if someone within an organization, or a client, comes to you as a data scientist and asks you to create an algorithm, they may have, not necessarily a nefarious motive, but a self-interested motive for asking. It may or may not be good in a larger context, because of the way they are going to use it in the end; they may be adversarial to the people they would use it on. So think not just about people who are attacking, or people who are threats to the group you’re in, but also about motivations.
There’s a lot of psychology in this that we, as data scientists, don’t often put a lot of emphasis on. That’s really what this is all about: understanding more of the psychology that goes into what happens with an algorithm. Why are things the way they are? Why are they done the way they are? What should be done?
That said, there absolutely are people who will try to gain access to a system or to its data. So take the necessary precautions, knowing that data attacks are possible. We’ve seen so many data breaches and hacks. Understand that the more you lock down the system and put appropriate controls in place, the better off you’re going to be.
Marie: Then also realize that if somebody got access to your system, or knew of your system and tried to design something similar in their own field or company, they could have different intentions. The algorithm you produce could inspire other algorithms used for different ends. What does that look like? It’s thinking about where things can lead two or three steps down the road, not just what’s in front of you today.
Lexy: I’ve often joked in my career that we become what we think of as psychic analysts: we have to think a few steps ahead. It’s really being strategic more than psychic. A lot of that strategy has to be informed by the possibilities, and the possibilities include facing adversaries in different ways: the way someone is going to use your work and how that’s going to impact the world.
In our next episode, we’ll talk about how image recognition and image creation are being used both for good and not so good.
Marie: Thank you for joining us for the Data Science Ethics Podcast. We will see you next time.
Lexy: Thanks much.
Lexy: I hope you’ve enjoyed listening to this episode of the Data Science Ethics Podcast. If you have, please like and subscribe via your favorite podcast app. Join in the conversation at datascienceethics.com, or on Facebook and Twitter at @DSEthics, where we’re discussing model behavior. See you next time.
This podcast is copyright Alexis Kassan. All rights reserved. Music for this podcast is by DJ Shahmoney. Find him on Soundcloud or YouTube as DJShahMoneyBeatz.