Episode 7: Anchoring Bias – Show Notes
Today we are talking about one aspect of why first impressions are so important – anchoring bias.
Anchoring bias is when the first piece of information we receive about something (our first impression of it) is weighted more heavily than it should be. Unlike statistical sampling biases, anchoring bias is a cognitive bias – an unconscious issue with the way people think and process information. Cognitive biases, including anchoring bias, can cause data scientists to give data undue importance or relevance, or to follow up on only a subset of potential hypotheses. Anchoring can shape not only the data used but also the questions asked and answered with it. For this reason, it may be even more insidious than any of the statistical biases we’ve covered to date.
Additional Links on Anchoring Bias
Fake Sales Trick Customers at Major Stores NBC article regarding inflating prices to then lower them, thereby setting an anchor for customers.
Woman Claims Doctors Ignored Cancer Symptoms because of Her Weight ABC article regarding a woman whose cancer went untreated as doctors presumed symptoms were related to weight.
Why Do Obese Patients Get Worse Care? New York Times article discussing cases and studies of doctors and their attitudes towards treating obese patients. Example: “Research has shown that doctors may spend less time with obese patients and fail to refer them for diagnostic tests.”
Overcoming Anchoring Bias Schwab article with some ways to avoid succumbing to anchoring bias. This article focuses on investing and spending but the advice works in other situations as well.
Episode Transcript
In episode 4 we dove deeply into statistical sampling biases and how these could impact the data used in data science. Today, we will flip to the other class of bias – cognitive bias.
Cognitive biases are unconscious flaws in thought patterns which influence the decision-making process. They’re often a sort of shortcut that allows us to make decisions more quickly without having to go through the onerous process of weighing out all of our options objectively. There are literally dozens of cognitive biases, each with different ways to skew data, hypothesis development, feature engineering – everything.
In this episode, we are going to talk about one specific cognitive bias that, I’m sad to say, we may all be guilty of. It’s called anchoring bias and it can be super-sneaky.
Anchoring bias is when someone relies too heavily on the first piece of information that they receive about something. It sets a sort of benchmark, even if it is an erroneous one, against which further information is compared.
As an example, suppose you go to an electronics store that is advertising a big sale. “Everything at least 30% off!” You find a cell phone that shows a sticker with a price of $600 that has a slash through it. The new, sale price is $400. You’re getting a 33% discount by purchasing during the sale, right?
Well… maybe. It depends on whether that $600 original price was real. You were signaled to look for big discounts – that all items would be at least 30% off. But off of what? If the original price of the phone was really $450, you saved far less than you thought – about 11%, not 33%. The store could have simply re-labeled the item as $600, causing you to anchor to that number as a starting point, even if they never sold it for that much. They created the anchoring bias by showing you a higher price first, then stating that they reduced it.
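To make the arithmetic concrete, here is a quick illustrative sketch (the function name and prices are my own, not from the episode) comparing the discount implied by the store's anchor price with the discount off the true original price:

```python
def discount_pct(anchor_price: float, sale_price: float) -> float:
    """Percent discount implied by a listed original price and a sale price."""
    return (1 - sale_price / anchor_price) * 100

# Discount relative to the $600 anchor the store displayed:
print(round(discount_pct(600, 400), 1))  # 33.3
# Discount relative to a true original price of $450:
print(round(discount_pct(450, 400), 1))  # 11.1
```

Same $400 price tag, very different deal – the only thing that changed is the anchor you measured it against.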
That’s the same principle infomercials have used for ages. When you don’t have a context for how much something should cost, they tell you. And then they tell you that you need to act now to save off that baseline they just set out of nowhere.
Another area where people claim that anchoring bias has affected outcomes is medical diagnosis. When a patient visits a doctor, the first information the doctor receives is often a glance at the patient. They do a first assessment right there, gathering some data about what they observe of your body composition, movements, general look, and so forth.
They might then also receive your vital statistics, collected by a helpful nurse or physician’s assistant. These statistics include your pulse, blood pressure, temperature, height, and weight. From your height and weight, they calculate a body mass index, or BMI. The BMI is an indication of whether you are at a healthy weight for your height. For what it’s worth, there is a very heated debate about the merits of this metric. A BMI between 25 and 29.9 puts you in the “overweight” category. Anything 30 or above is considered obese. If, in that first glance, the doctor already assessed you as overweight, the BMI might confirm that diagnosis. The recommendation, of course, is to lose weight.
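For reference, BMI is computed as weight in kilograms divided by the square of height in meters. A short illustrative sketch (the function names are my own) of the calculation and the standard category cutoffs mentioned above:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def bmi_category(value: float) -> str:
    """Standard BMI bands, including the 25-29.9 'overweight' range."""
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal"
    if value < 30:
        return "overweight"
    return "obese"

print(round(bmi(90, 1.75), 1))      # 29.4
print(bmi_category(bmi(90, 1.75)))  # overweight
```

Note that the formula uses only height and weight – which is exactly why its merits are debated: it says nothing about body composition.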
Several cases have come out where patients assert that doctors ignored their symptoms of larger issues because they were diagnosed as obese. Some people who eventually found out that they had a lymphatic disorder or even cancer were told to simply lose weight and they would feel better.
Could it be that doctors focus too much on the first information they receive to offer a truly objective diagnosis? Sure. Doctors are people too, and they are just as likely as anyone else to succumb to anchoring bias.
This bias has some interesting implications for data science. It can influence the data collected, especially when a person is responsible for creating it, as is the case with medical diagnosis codes. It can sway the questions asked of data scientists, or the hypotheses shared by subject matter experts. Anchoring bias can also creep in during data analysis, as data scientists might latch on to a theory early and focus in one direction, thereby ignoring alternatives. All in all, it can lead to both biased analyses and erroneous conclusions.
There are a few good ways to guard against anchoring bias. The first is to be aware of it and call it out. When you notice that someone has set an anchor, don’t accept it at face value. Do your research and determine whether that anchor is accurate.
Second, get more perspectives. Diversity is a great way to introduce additional opinions and hypotheses so that you don’t overly focus on a single path. Listen and be willing to try out different methods to see what works.
Last, and probably most difficult, is to be all right with being wrong. It’s okay if your first instinct was incorrect. Our gut instincts are not always accurate, despite the anecdotes. Seek out the truth, even if it disagrees with what you first saw. You’re likely to get to a more ethical place that way.
In our next episode, we will cover an interesting tale of a company knowingly testing the influence of anchoring bias in the field of love. Can anchoring bias really impact our heart’s desire? Find out next time.
I hope you’ve enjoyed listening to this episode of the Data Science Ethics Podcast. If you have, please like and subscribe via your favorite podcast app. Join the conversation at datascienceethics.com, or on Facebook and Twitter at @DSEthics, where we’re discussing model behavior. See you next time.
This podcast is copyright Alexis Kassan. All rights reserved. Music for this podcast is by DJ Shahmoney. Find him on Soundcloud or YouTube as DJShahMoneyBeatz.
Hypothesis: The stigma against discussing salaries exists because companies rely HEAVILY on anchoring bias to distribute unequal wages. Is your wage fair? Surely it is, because it’s the only number you’ve seen. That fallacy is obliterated the moment employees begin to compare numbers. Ergo, we are thoroughly influenced to feel shame and embarrassment to avoid discussions about pay. That stigma benefits only the firm itself to keep your pay unfairly low.
My opinion? If a coworker seems interested to know what you earn: Tell them. Frankly. Just don’t agree to loan any money.