Removing bias from job descriptions is a tough job

Episode 5: Hired by Algorithm – Show Notes

Finding and applying for jobs is a painful task – made worse by the hiring bias pervasive throughout the process. In today’s episode, we consider advances in helping to eliminate hiring bias from the job description to the interview. We call out a few of the biases lurking in HR and provide tools and insights to help combat them.

[0:31] Article synopsis, progress in the field, business imperative to be more diverse

[3:50] Challenges with bias, sources of bias in algorithms for hiring

[5:38] Selection as a type of hiring bias

[6:13] Self-selection as a type of hiring bias

[7:09] Un-biasing job descriptions

[10:51] Survivorship as a type of hiring bias

[12:01] Algorithmic resume screening as a cause of survivorship bias

[12:37] Similarity-Attraction theory as a type of hiring bias

[13:20] Standardizing the interview process and blind resume reviews to reduce bias from similarity attraction

[15:37] Caution! Algorithms can systematize existing hiring bias

[18:24] Final comments and quotes

This is a huge topic and one that merits much more than just one quick take. Be on the lookout for a deeper dive into data science ethics in addressing hiring bias in future episodes.

Additional Links on Hiring Bias

Silicon Valley is Stumped: Even AI Cannot Remove Bias from Hiring – the main article referenced in the show, discussing the challenges faced in trying to un-bias the process.

Employers Tap Software to Improve Gender Diversity – article from SHRM describing the benefits of a diverse team, various players in the software space, and how to raise awareness.

TotalJobs Gender Bias Study – findings and insights from the TotalJobs study of over 77,000 job postings in the UK. You can also use their Gender Bias Decoder to check for biased wording in job ads.

Teach Girls Bravery, Not Perfection – TED talk describing the difference in how women and men apply to positions.

GitHub Repo to De-Bias Word Embeddings – data and code to flag and remove known biased word embeddings from text.
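For readers curious what removing bias from word embeddings looks like in practice, here is a minimal sketch of the usual "neutralize" step: subtracting a word vector's projection onto an estimated gender direction. The vectors below are made up purely for illustration; this is not the linked repo's actual code, and real embeddings have hundreds of dimensions, with the gender direction estimated from word pairs like "he"/"she".

```python
import numpy as np

def neutralize(word_vec: np.ndarray, bias_direction: np.ndarray) -> np.ndarray:
    """Remove the component of a word vector that lies along the bias direction."""
    b = bias_direction / np.linalg.norm(bias_direction)
    return word_vec - np.dot(word_vec, b) * b

# Made-up 3-dimensional vectors for illustration only.
gender_direction = np.array([1.0, 0.0, 0.0])
job_word = np.array([0.4, 0.7, 0.2])  # hypothetical embedding with a gender component

print(neutralize(job_word, gender_direction))  # -> [0.  0.7 0.2]
```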

Episode Transcript

Lexy: Welcome to the Data Science Ethics Podcast. This is Lexy. I’m joined today by Marie Weber.

Today we’re going to follow up our last episode, which talked about bias, with one of our quick takes. This one is about Silicon Valley being stumped about how to remove bias from hiring algorithms. Marie, give us a synopsis.

Marie: This was an article that we found on CNBC. It was talking about how there are new algorithms being developed by different companies to help with the hiring process. Some of these are related to how you can take bias out of your selection process. We also have companies that are looking at how to help find more candidates, and how you can write job descriptions so that you take bias out of even the words that you use, to make sure that you’re getting a qualified field that has more gender and ethnic diversity. The other thing that they were pointing out in this article is that, if you want to get bias out of the hiring system, that’s where algorithms can potentially help. But they also pointed out that if you don’t have the right data for those algorithms, you’re still gonna have the bias in the process. So it was a really interesting article that talked not only about algorithms and how algorithms are being put into the HR process, but also the data science ethics of how you put together those algorithms, how you get the right data, and the bias that might be involved.

The great news is that there has been some progress in this area. Companies are looking to have teams that are more diverse because it helps them be more effective as a business. It helps them potentially increase their sales because, when you have people with different ways of thinking, you come up with better solutions. There’s a business imperative in terms of why companies want to do this. And so having algorithms that can help them be more efficient in finding the right talent and making sure they have diversity in their candidates can be a really positive thing for businesses.

But they also want to make sure that the algorithms they’re using are having a positive impact. That’s why it was really encouraging when we did some additional research and found that TotalJobs had actually studied this and found that job ads using gender-neutral wording attracted 42% more responses than those with potentially biased wording. That further reinforces the need to remove such biases, because it really can help open up the pool of applicants for an open position at a company.

By using some of these algorithms and making sure that there’s a more standard process and a better job description, you actually get a better candidate pool. The idea is that, by being less biased, you get more people who will be interested in applying. That provides a stronger candidate pool and allows companies to run a better selection process when they need to fill different roles.

The article went through and talked about a couple different examples. One of the examples that they talked about in the article was that Unilever has actually seen their talent pool increase in diversity by 16% since working with some of the vendors that are in this space. The one referred to in the article specifically is a company called HireVue.

The great thing is that there have been advances in terms of what companies have been able to do to increase diversity and create a more diverse talent pool. That’s when it goes well. Those are the benefits. But we also know there are challenges, because hiring is already a biased process when it’s a human process. When we bring in algorithms, there’s still the possibility that they’ll be biased.

Lexy: Yeah. That’s an interesting one as we follow up our data science process episode, which is episode 2, where we talked about how the data gathering stage and understanding the biases in the data that you’ve gathered can have implications throughout your algorithm. And so when you come to the point where you’re using the algorithm, understanding what biases were in the data to start with is incredibly critical to understanding the outcomes you’ve now gotten.

The article talks a lot about using data from historical information to identify whether something is biased or not biased. The problem with training an algorithm on historical information is that you’re going to tend to repeat the past. If you say “well, what we did in the past was good” then all the same biases that were in that data are going to repeat.

Marie: And in the article there was actually an interesting quote that said “if we train the algorithms on historical data, to a large extent, we’re setting ourselves up to merely repeat the past. So we need to do more and we need to make sure that we’re really examining the bias that’s embedded in the data.” That wasn’t a direct quote but that was the essence of a quote from Cathy O’Neil, who is somebody that is now a consultant in this area and helps to audit some of these different algorithms.

Lexy: Which means that she’s not an unbiased source of information either, but that’s a separate topic.

What’s interesting there is we’re seeing a few different levels of bias. One is selection bias.

Selection bias is when you don’t adequately randomize to get a sample of the entire population and so you have a slanted view essentially of what types of people would be in the population. As an example here, we might see, for instance, that if a job description was only posted on paid sites like The Ladders or something like that, we already have a selection bias in that the population that that job description is going to reach is only a subset of people – those who are willing and able to pay for a job site.

In addition to that there’s a self-selection bias. This one’s an interesting one because it speaks to the job description itself and the fact that there are certain words that are seen as causing bias in self-selection.

Self-selection bias is when you expect that the respondents, or the applicants, are going to self-select into your sample. They’re going to be the ones to specify that they’re included in this selection. What’s interesting there is that, in hiring practices, you rely on applicants to apply. Or you have recruiters making outbound recruitment efforts, but you still have some amount of self-selection in who responds to those outbound communications. And so there’s absolutely a part of the hiring process that has a self-selection bias, and that is a huge part of what we’ve seen in this article and others. This is part of the same type of bias that you see in the job descriptions.

In the job descriptions, we see words that are not necessarily gendered but that, because of societal expectations, are taken as gendered. So, for example, we were looking at a couple of other articles that talked about what types of words were actually being flagged as biasing. Things like “hero” or “confident” or “best” or “top” were seen as biasing towards men, whereas “responsible” or “supportive” or… I can’t remember, there were a couple of other ones… were seen as more biasing towards women.

Marie: Right. “Collaborative” leaned more towards women, but “high achiever” leaned more towards male roles. So there can even be expectations tied to a gender, and some of these terms can have a bias that leans towards one gender or the other. Or, what was really interesting, is that some of these terms can also have a bias towards one ethnicity or another.

Lexy: Yeah. I will say that the one I took umbrage with was that they said “analytical” was skewed towards male and I personally disagree.

Marie: But of course.

Lexy: Of course.

Marie: Of course.

Lexy: However, they did talk about the fact that any society in which you are not supposed to be a braggart – where you’re not supposed to boast about your skills – would dissuade someone from applying to a position that uses terms like “top performer” or “best performer” or whatever those types of superlative words might be. And that applies to any of those cultures, or any other kind of subset of a culture – even in American culture there are areas where some people are expected to be more humble than others. There may be religious or gender or ethnic considerations with regard to how those descriptions then interplay with whether someone will or will not apply.
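As a rough illustration of what a wording checker like the Gender Bias Decoder does, here is a minimal sketch that flags gender-coded terms in a job ad. The word lists are illustrative examples pulled from the discussion above, not the dictionaries any real tool uses.

```python
import re

# Illustrative word lists only, based on the terms discussed above;
# real tools use much larger, research-backed dictionaries.
MASCULINE_CODED = {"hero", "confident", "best", "top", "high achiever"}
FEMININE_CODED = {"responsible", "supportive", "collaborative"}

def flag_coded_terms(job_description: str) -> dict:
    """Return any gender-coded terms found in a job description."""
    text = job_description.lower()
    def found(terms):
        return sorted(t for t in terms if re.search(r"\b" + re.escape(t) + r"\b", text))
    return {"masculine_coded": found(MASCULINE_CODED),
            "feminine_coded": found(FEMININE_CODED)}

ad = "We want a confident high achiever to lead our collaborative, supportive team."
print(flag_coded_terms(ad))
# {'masculine_coded': ['confident', 'high achiever'],
#  'feminine_coded': ['collaborative', 'supportive']}
```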

Marie: I think it’s even worth mentioning that how companies put together job descriptions has been updated a little bit over the past few years. When they talk about a job description, it lays out what the required qualifications are that a candidate must have and then what would be the nice-to-haves. Even as we were doing some research for this podcast, we noticed that that was referred to as one of the best practices, because there are also applicants that, if they look at a job posting and don’t feel like they meet 100% of the criteria, won’t apply. Whereas if you’re able to specify “this is the needed experience, these are the needed skills” versus “these are the like-to-haves,” you’re, again, potentially going to open up that applicant pool and get more of those qualified individuals that you would be looking for. They won’t self-select themselves out of the process.

Lexy: That’s an interesting one. There was a TED Talk about that, specifically with regard to women – the fact that women tend to apply to things only when they feel they are a 100% fit for the role, that they have 100% of the skills that are required. And they view all of the skills on that list as required.

The same is not true of men. In the TED Talk, they mentioned that it was 60% of the skills – if a man felt that he had 60% of the skills, he would apply. It’s a very interesting one. And when we think about, again, how do you position that in a job description? How do you un-bias those types of things to allow people to say “yes, I am a good applicant. I am going to apply. I will self-select into this.” That’s a huge part.

Another bias that is part of this is survivorship bias. Survivorship is when you look at only those observations that are still “alive.” In this case, you might only look at employees who are still employees and who are seen to be good hires. So if you look, for example, at your top performers, you are saying this is the example of success. The problem there, of course, is that if you look for more of those examples of success, or those who are likely to be like those examples of success, you are re-emphasizing that same bias. You’re not taking into account those people who maybe had been hired but, for whatever reason – maybe there was a cultural misfit, or maybe another opportunity came up, or what have you – are no longer with the firm; you’re already removing those as possibilities. Or if you don’t even look at the people who applied but were not accepted, or who got further in the process but simply lost out, or had to drop out of the process on their own… All of those could have been good employees, top performers, good examples, that are being excluded.

Marie: There are already systems in place that have become very efficient at screening resumes and evaluating personality tests. So much so that 72% of resumes don’t even get to a human to review. That means that only 28% are actually getting to an HR manager, and that’s where they’re spending their time. That’s where they’re focused on saying “okay, I’m going to review this 28% of resumes that’s already gone through this filter.” When you think about that type of scale, again, you want to make sure that you’re considering the biases that might already be in the system.

Lexy: Another bias of the hiring process – and this one comes in a little bit later in the process – is the similarity-attraction theory. This theory states that we tend to like people, or prefer to interact with people, who are like us. This is one we would tend to see in a face-to-face interview. There are specific interview coaches who will tell you to act like the person you are interviewing with.

Marie: Oh sure. Mimic them. Do the same types of hand gestures or head movement that they do. For sure.

Lexy: Absolutely. And that is all based on the similarity-attraction theory. Meaning that if you do those things, you are more likely to be likable to them.

Marie: Some of the other things being looked at across the HR industry are using things like a sample test, or having somebody produce a sample piece of work so that their work can be evaluated, and also standardizing interview questions. When it comes to standardizing interviews and the interview questions, that’s another place where data science can come in, because then you have information and content that’s more comparable across multiple candidates, versus a very subjective interview where you’re just having a conversation, seeing how it goes and what the personality fit is, which is much more subjective.

There’s also been a move and a trend towards doing blind resume reviews. And, Lexy, maybe this is something where you can talk about how there are certain pieces of data that can be removed to take out bias in the review process.

Lexy: Often what you’ll find when somebody says it’s a blind review is that they’ll remove the name and the address of the person. They can’t really remove what the experience of the person was or their education levels. Obviously, those are very impactful. However, some of the things that you might not remove, but that could be a proxy and still introduce bias, are things like where those positions were or what school someone had attended. As an example, if you look at a resume and the university is in another country, you might immediately assume something about that candidate based on the fact that they attended a foreign university.

Marie: Right. And there are people that maybe even are from a different country of origin but still studied at…

Lexy: Yes – still studied abroad or whatever. Yeah.

Marie: There are some people who put together resumes that have their picture on them. Obviously, that’s an area where, to do a blind review, you would remove the image. You would also make sure there’s no name, because there have been studies showing that if you remove the name, candidates of a minority gender or a minority ethnicity for that role tend to get higher consideration.

Lexy: We’ll talk more about this when we think about data privacy and what it takes to truly anonymize data. There are a lot of factors that would need to be removed to truly make a blind resume review.
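To make the blind-review idea concrete, here is a minimal sketch of a redaction step over a hypothetical resume record. The field names and the lists of identifiers and proxies are assumptions for illustration; as noted above, a real blind review would have to handle many more proxy variables.

```python
# Hypothetical resume record; field names are illustrative, not from any real system.
resume = {
    "name": "Jane Doe",
    "address": "123 Main St, Springfield",
    "photo_url": "https://example.com/photo.jpg",
    "university": "Example University",
    "experience": ["Data Analyst, 3 years", "BI Developer, 2 years"],
    "skills": ["SQL", "Python", "Tableau"],
}

# Fields dropped outright for a blind review.
DIRECT_IDENTIFIERS = {"name", "address", "photo_url"}
# Fields kept but masked because they can act as proxies for ethnicity, age, or class.
POTENTIAL_PROXIES = {"university"}

def blind_review_copy(record: dict) -> dict:
    """Return a redacted copy of a resume record for blind screening."""
    redacted = {}
    for field, value in record.items():
        if field in DIRECT_IDENTIFIERS:
            continue  # drop direct identifiers entirely
        redacted[field] = "[REDACTED]" if field in POTENTIAL_PROXIES else value
    return redacted

print(blind_review_copy(resume))
```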

Marie: One of the things that this article is bringing to light is that these are all biases that are currently part of the HR process. When we’re looking at algorithms to help improve the HR process, we need to be very aware of these biases because otherwise they’re going to become part of the algorithms that are used by the industry.

Lexy: There’s a possibility that the fact that there’s an algorithm there will just systematize the bias that was already underlying. A lot of that has to do with, again, the data that’s put into the algorithm. The more we can identify data biases and try to adjust that data to make it less biased, so that the algorithm can learn to be less biased, or the more a software system can adjust, the better. It might be able to say “here’s where the biases are. Here’s where the data is showing clear over-indexing,” for example, “in different areas, or clear under-representation of certain groups. We want to specifically adjust for that. Let’s keep it as a flag or a trigger so that, if it’s still present, it alerts us to say this is still a problem. We still need to work on this.” That has to involve humans, though. It has to have that additional human-in-the-loop kind of review to say “yup, I still see it. How do we adjust for this? How do we address it?”
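Here is a minimal sketch of the kind of representation check described above: compare each group’s share among selected candidates to its share among applicants and flag large gaps for human review. The threshold and the toy data are assumptions for illustration.

```python
from collections import Counter

def representation_gaps(applicants, selected, threshold=0.10):
    """Flag groups whose share of selected candidates differs from their share
    of applicants by more than `threshold` (absolute difference)."""
    app_counts = Counter(applicants)
    sel_counts = Counter(selected)
    flags = {}
    for group, app_n in app_counts.items():
        app_share = app_n / len(applicants)
        sel_share = sel_counts.get(group, 0) / len(selected) if selected else 0.0
        gap = sel_share - app_share  # positive = over-indexed, negative = under-represented
        if abs(gap) > threshold:
            flags[group] = round(gap, 3)
    return flags

# Toy data: groups are equally represented among applicants but not among those selected.
applicants = ["group_a"] * 50 + ["group_b"] * 50
selected = ["group_a"] * 16 + ["group_b"] * 4
print(representation_gaps(applicants, selected))  # {'group_a': 0.3, 'group_b': -0.3}
```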

Marie: So as we talk about data science ethics in relation to HR, I think the idea of making sure that people are aware of the biases that might be involved, and how to address them, is important. Because the big thing is that you wouldn’t want to take something like HR, which is really looking for ways to scale and be more efficient in finding the right applicants and matching them to the right position, and not have the ability to be as fair and as just and equitable as possible.

Lexy: Absolutely. This is something that has ties to equal opportunity, at least in the essence of what equal opportunity tries to be. You should not be discriminated against for being part of a protected class. An algorithm can just as easily discriminate by making a decision up front that influences whether or not an HR person or recruiter ever sees that applicant. Never mind whether that person is hired – are they even considered? And so we need to make sure that, as these algorithms are used more commonly, they are enforcing the spirit of the law, essentially, in equal opportunity employment and making sure that there’s a fair representation of the talent pool that’s available.

Marie: Lexy, is there anything else you want to add as we think about the overall data science ethics as it applies to HR and job descriptions and things like that?

Lexy: There’s been a lot of progress. There’s still clearly a lot to do. I think it will be a very interesting area to continue to study.

As a final quote from the article, I will leave you with this. “AI can work as long as the input data is accurate.”

Marie: Absolutely!

In addition to the main article that we talked about, we’ll also include links to some other resources and research that we were able to find in the show notes below.

Lexy: There are companies who are making data available via GitHub and other locations so that you can look at some of this information, make your own assessments, and determine whether their algorithms are fair and unbiased. If you’re interested in doing that, we will provide those links in the show notes also.

Thanks very much for joining us. See you next time.

Marie: See ya!

I hope you’ve enjoyed listening to this episode of the Data Science Ethics Podcast. If you have, please like and subscribe via your favorite podcast app. Join in the conversation at datascienceethics.com, or on Facebook and Twitter at @DSEthics, where we’re discussing model behavior. See you next time.

This podcast is copyright Alexis Kassan. All rights reserved. Music for this podcast is by DJ Shahmoney. Find him on Soundcloud or YouTube as DJShahMoneyBeatz.
