How to Reduce the Effects of AI Bias in Hiring

Artificial intelligence (AI) can streamline the hiring process, making it easier for recruiting teams to acquire new talent. While AI can support better decision-making and reduce hiring bias, it often carries the same biases as the people who created it.

So how can companies overcome this challenge? 

What is AI bias?

There are numerous biases in the hiring, management and firing processes. Many are unconscious or subtle — for example, hiring slightly fewer women overall or letting go of older employees too soon. 

About 50% of people think their race, gender or ethnicity has made it harder to land a job. Some companies have implemented AI in their talent acquisition functions to help make decisions without factoring in these protected classes. 

The issue is that AI doesn’t work that way. It’s only as good as the data set programmers use to train it, and any errors or inherent biases will be reflected in the AI’s output.

These aren’t emotional biases, but rather flaws in the data and programming that lead to unwanted outcomes. Several common problems create bias in AI.

Data may reflect hidden societal biases

Searching for the word “beautiful” on Google returns mostly photos of white women, because the algorithm was trained on a data set dominated by these types of images.

The search engine doesn’t have racial preferences, but the samples from which it draws its results were made by people who did. 

Algorithms can influence their own data

An algorithm can also influence the data it receives. As photos rise to a search engine’s front page, more people click on them, which prompts the algorithm to rank them even higher. Through this positive feedback loop, AI can magnify its own biases.
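
To make that loop concrete, here’s a minimal, hypothetical simulation (invented numbers, not any real search engine) of how a tiny early ranking advantage snowballs once clicks feed back into the ranking:

```python
import random

random.seed(0)

# Hypothetical catalogue: five photos start with nearly identical scores.
photos = {f"photo_{i}": 1.0 + random.random() * 0.01 for i in range(5)}

for _ in range(1000):
    # The engine shows photos in score order; assume 70% of users click the
    # top result and the rest click a random one (position bias).
    ranked = sorted(photos, key=photos.get, reverse=True)
    clicked = ranked[0] if random.random() < 0.7 else random.choice(ranked)

    # Every click feeds back into the score used for the next ranking.
    photos[clicked] += 1.0

# One photo ends up with the overwhelming majority of clicks, purely because
# an early lead kept reinforcing itself.
print(sorted(photos.items(), key=lambda kv: -kv[1]))
```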

Not everything is quantifiable

It’s hard to quantify certain features when creating training data.

For example, how do programmers quantify good writing? Writing assistance software often looks for proper grammar, correctly spelled words and sentence length, but it has trouble detecting nuances of human language, such as rhyming and idioms.
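
As a rough illustration, here’s a made-up scorer (not any real product) that reduces “writing quality” to the signals that are easy to measure:

```python
# Hypothetical, naive "writing quality" score built only from surface features:
# word count and average word length (a crude proxy for vocabulary).
def naive_score(sentence: str) -> float:
    words = sentence.rstrip(".!?").split()
    avg_word_len = sum(len(w) for w in words) / len(words)
    length_ok = 1.0 if 5 <= len(words) <= 20 else 0.5
    return round(0.5 * min(avg_word_len / 6, 1.0) + 0.5 * length_ok, 2)

# The clunky, jargon-heavy sentence scores *higher* than the vivid idiom,
# because nothing in the metric captures meaning, tone or figurative language.
print(naive_score("Finishing that report was a piece of cake."))                     # 0.85
print(naive_score("The utilization of resources was operationalized effectively."))  # 1.0
```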

People can manipulate training sets

Bad actors can purposely corrupt training data.

Tay, an artificial intelligence chatbot released by Microsoft through Twitter in 2016, was only online for a few hours before people taught it to post inflammatory content.

The tool spewed violent, racist and sexist misinformation, and Microsoft was forced to take it down a mere 16 hours after its launch. Open-source or public AI often falls victim to this issue.

Unbalanced data affects the output

Data scientists use the phrase “garbage in, garbage out” to explain that flawed input data produces flawed outputs. Programmers may inadvertently train AI on data whose distributions don’t match the real world.

For example, facial recognition software often has trouble recognizing the faces of people of color because the original training set mostly contained photos of white people.

Data sets can also contain correlated features the artificial intelligence unintentionally associates with a specific prediction or hidden category. 

For example, suppose programmers train an AI on a sample that contains no female truck drivers. The software will link the “male” and “truck driver” categories together by process of exclusion, creating a bias against women: based on past patterns, it may conclude they should not be hired as truck drivers.
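
Here’s a deliberately simplified sketch of that truck-driver scenario, using invented toy data and the scikit-learn library, to show how gender becomes a proxy when nothing in the sample contradicts it:

```python
from sklearn.tree import DecisionTreeClassifier

# Invented toy data. Features: [is_female, years_of_driving_experience].
# In this sample, no woman ever appears as a hired truck driver, so gender
# alone perfectly separates the classes even though it's irrelevant to the job.
X = [
    [0, 3], [0, 9], [0, 15], [0, 6],   # male applicants, all hired here
    [1, 2], [1, 7], [1, 12], [1, 10],  # female applicants, none hired here
]
y = [1, 1, 1, 1, 0, 0, 0, 0]           # 1 = hired as a truck driver

model = DecisionTreeClassifier(random_state=0).fit(X, y)

# A woman with 20 years of experience is still predicted "not hired": the model
# learned gender as a shortcut because the sample never contradicted it.
print(model.predict([[1, 20]]))  # -> [0]
```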

Why AI bias is a challenge in hiring

Talent acquisition teams are committed to treating candidates fairly in the hiring process.

But with heavy workloads, many teams have turned to AI and automation software to help them sort through large volumes of resumes and job applications.

Before COVID-19, the average job opening received 250 applications. Since then, applicant flow for many roles has risen further, and some entry-level jobs have received thousands of applicants.

Many hiring teams use AI programs, so this software must be unbiased. It can mean the difference between automatically discarding an application and hiring the most qualified candidate.

The AI recruitment industry is worth over $500 million, and TA teams use it for everything from predicting job performance to assessing facial expressions during a video interview. 

However, many applicants report being rejected by this type of software because they have foreign-sounding names or used certain words in their resumes.

People’s names and word choices aren’t protected classes, but they can often indicate race, gender or age.

In 2018, Amazon had to scrap a recruiting tool that automatically penalized resumes that included the word “women’s,” as in “women’s studies” or “women’s university.”

That’s despite the fact that companies in the top quartile for gender diversity are 25% more likely to make above-average profits than companies in the lowest quartile. 

Reducing the effects of AI bias in hiring

How can well-meaning talent acquisition teams avoid these types of bias when using artificial intelligence in their hiring process? Here are some best practices.

Double-check AI predictions

First, it’s important not to take AI predictions at face value. Algorithms do their best to make good forecasts, but they can get it wrong.

Someone should review AI suggestions and decide whether to accept them, veto them or examine them further. One body of research suggested a 50% chance of AI automating all jobs within 120 years, but forecasts like that overlook nuances such as the ongoing need for humans to check for bias.
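
One practical pattern is to route low-confidence predictions to a recruiter instead of acting on them automatically. Here’s a hypothetical sketch (not a feature of any particular applicant tracking system):

```python
from dataclasses import dataclass

@dataclass
class Screening:
    candidate: str
    ai_score: float       # model's estimate that the candidate is a fit (0 to 1)
    ai_confidence: float  # how certain the model is about that estimate (0 to 1)

def route(screening: Screening, confidence_floor: float = 0.8) -> str:
    """Never auto-reject; only auto-advance clear, high-confidence fits."""
    if screening.ai_confidence < confidence_floor:
        return "human_review"   # uncertain -> a recruiter decides
    if screening.ai_score >= 0.7:
        return "advance"        # confident fit -> move forward
    return "human_review"       # everything else still gets human eyes

# A borderline, low-confidence screening is escalated rather than discarded.
print(route(Screening("A. Candidate", ai_score=0.4, ai_confidence=0.55)))  # human_review
```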

Report biases immediately

Recruiting teams should report any biases they notice in AI software. Programmers can often patch the AI to correct it.

Seek transparency

Programmers should strive to provide transparency in their AI algorithms. In other words, they should allow users to see which types of data the software was trained on.

This process can be challenging because of hidden, hard-to-interpret layers, but it’s still better than hiding the information altogether. Talent acquisition teams should specifically look for transparent AI software. 
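
For illustration, here’s a hypothetical, simplified “model card” a vendor could publish alongside its software, in the spirit of the model cards and datasheets proposed in the machine-learning fairness literature:

```python
# Hypothetical "model card" a vendor could publish so buyers can see what the
# screening model was trained on and where its blind spots may be.
MODEL_CARD = {
    "model": "resume-screener-v2",
    "intended_use": "Rank applications for recruiter review, never auto-reject",
    "training_data": {
        "source": "Historical applications, 2015-2020",
        "size": 48_000,
        "known_gaps": ["few applicants over 55", "US-format resumes only"],
    },
    "last_bias_audit": "2024-01-15",
}

def discloses_training_data(card: dict) -> bool:
    """A buyer-side check: avoid software that hides its training data."""
    return bool(card.get("training_data", {}).get("source"))

print(discloses_training_data(MODEL_CARD))  # -> True
```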

Get different perspectives

Having a sociologist or psychologist on the team is valuable when leveraging new AI software. They may notice biases in training sets and offer advice on correcting them. 

Ask questions

Programmers should perform a few final checks before releasing new AI software to the public. Does the data match the overall goals? Does the AI include the right features? Is the sample size large enough, and does it contain any biases?

There may eventually be a standardized process to vet new AI software before launching it. Until then, programmers must double-check their work. 
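
One concrete check is the four-fifths rule used in US adverse-impact analysis: flag any group whose selection rate falls below 80% of the highest group’s rate. Here’s a rough sketch of that check, with made-up numbers:

```python
def adverse_impact_flags(selected: dict[str, int], applicants: dict[str, int],
                         threshold: float = 0.8) -> dict[str, float]:
    """Flag groups selected at less than `threshold` times the top group's rate
    (the "four-fifths rule" used in adverse-impact analysis)."""
    rates = {group: selected[group] / applicants[group] for group in applicants}
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items() if rate / best < threshold}

# Made-up numbers: the AI advanced 60 of 200 men but only 15 of 100 women.
print(adverse_impact_flags(selected={"men": 60, "women": 15},
                           applicants={"men": 200, "women": 100}))
# -> {'women': 0.5}: women advanced at half the men's rate, so the tool
#    needs a closer look before launch.
```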

Improve diversity, equity and inclusion

Almost 50% of recruiters say job seekers are inquiring about diversity and inclusion more than they did in 2021. Companies should seek to create a culture of diversity, equity and inclusion (DEI) beyond just improving their AI use.

For example, 43% of businesses said they were removing bias from the workplace by eliminating discriminatory language from their job listings. 

Look to create balance

Artificial intelligence is simply a tool that does what it was designed to do. Training it with biased data leads to skewed — and potentially harmful — results. 

Recruiting teams must scrutinize any software they use to hire new employees. Above all, it’s always best to have a real person make the final decisions — because if a company wants to hire human beings, it should treat them as such.

Looking for better recruiting tech to help you boost the diversity of your talent pool and minimize bias in your hiring process? Learn how LeverTRM can aid your DEI efforts.

