The Problem of AI Tech in Talent Acquisition


Automated technologies are increasingly popular among the human resources set, and if you’re one of the HR professionals who’ve used them, it’s very likely it was for talent acquisition.

As of 2022, about a quarter of employers are using automation and/or artificial intelligence (AI) for HR-related decisions, according to SHRM, and 79% of them use those tools for recruitment and hiring, by far the most popular application. 

Its popularity makes sense. Such automation helps recruiters screen applicants, narrow applicant pools, and hire new workers quickly—which is especially appealing given the considerable pressure on talent acquisition to fill open jobs. Using tech for previously human-powered tasks can also make people feel more confident about their decisions.

The problem is that data-backed decisions aren’t necessarily better decisions. In our rush to make work more efficient, we’ve introduced new problems, and employers need to step back and evaluate how recruitment automation is being used.

The White House and EEOC raise red flags regarding TA tech 

AI in talent acquisition has been getting a lot of attention from academics, think tanks, advocacy groups, and, now, the federal government. In May 2022, both the Equal Employment Opportunity Commission (EEOC) and the Department of Justice released statements warning that AI and algorithms used in hiring can introduce new problems.

According to the EEOC, tools used “in an attempt to save time and effort, increase objectivity, or decrease bias” can actually “disadvantage job applicants and employees with disabilities,” and violate the Americans with Disabilities Act (ADA). The statement specifically names “automatic resume-screening software, hiring software, chatbot software for hiring and workflow, video interviewing software, analytics software, employee monitoring software, and worker management software.”

In October 2022, the White House Office of Science and Technology Policy released the “Blueprint for an AI Bill of Rights,” a set of principles that includes recommendations for mitigating bias in AI tools used in employment decisions.

The think tank New America has also noted problems of racial and gender discrimination perpetuated by flawed technology. In 2021, Dawn Zapata reported for Reuters on evidence of anti-Black bias in AI tools used for recruiting. She writes, “too often, the biases that professionals from minority groups experience in the real world are replicated in the AI-enabled algorithms used in training and recruiting.”

For example, an applicant-evaluation tool trained on data describing a largely white, male population could inadvertently deprioritize resumes whose language signals a different gender or race. Simple application screening is popular in recruiting tech: the SHRM report I mentioned found that 64% of HR professionals say the tools they use automatically filter out unqualified applicants. Just think of all the articles that offer advice on how to beat an ATS (applicant tracking system).
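To make that mechanism concrete, here’s a minimal sketch in Python using scikit-learn. Everything in it is hypothetical (the tiny resume corpus, the past hire/reject labels, the phrasing of the activities), but it shows how a screener trained on a skewed hiring history can attach negative weight to words that signal demographics rather than skills.

```python
# Hypothetical illustration: a toy resume screener that inherits bias
# from skewed historical hiring data. Not any vendor's actual system.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented history: in this tiny pool, past hires skewed male.
resumes = [
    "software engineer java men's rugby club",      # hired
    "backend developer python chess club",          # hired
    "data analyst sql men's soccer league",         # hired
    "software engineer java women's chess club",    # rejected
    "frontend developer javascript women's rugby",  # rejected
    "data analyst sql hiking club",                 # hired
]
hired = [1, 1, 1, 0, 0, 1]

# A standard bag-of-words model fit to the historical outcomes.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect what the model learned: tokens that co-occur with past
# rejections ("women") pick up negative weights, so future resumes
# containing them get deprioritized regardless of qualifications.
for token, weight in sorted(
    zip(vectorizer.get_feature_names_out(), model.coef_[0]),
    key=lambda pair: pair[1],
):
    print(f"{token:12s} {weight:+.3f}")
```

Run it and the token “women” comes out with a strongly negative weight, even though it says nothing about a candidate’s ability to do the job. A production model with thousands of features can hide the same pattern somewhere no one thinks to audit.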

Employers build unwieldy HR tech stacks

Beyond introducing new forms of bias (or failing to mitigate human bias), recruiting tech has left many HR professionals just totally teched out. Companies threw a lot of money at new workplace apps during pandemic lockdowns, and many of them haven’t cleaned out the junk drawer since.

Employers that overload their teams with tech risk losing messages, tasks, and problems in the noise; frustrating people who have to constantly toggle between apps; scattering information; and paying big, pointless bills for tech stacks that atrophy from lack of use.

Before you add something new, take a look at what you’ve already got.

But we know that most employers want to do the right thing

The use of automated recruiting tools is almost always well-intentioned. Most employers use them to save their talent acquisition teams time and resources, and many adopt them with the goal of reducing bias in hiring. Think of background checkers that don’t automatically disqualify candidates but instead consider records in relation to the job. And some tools actually do a pretty good job.

According to the SHRM report, most employers are getting their HR support tools from SaaS providers, and 46% want to see “more information or resources on how to identify potential bias when using these tools.”

Some TA professionals just don’t want to create more distance between their teams and the talent pool. Among the companies that don’t use recruiting AI, 35% say it’s because such tools “lack the human touch.”

The relationship between human decisions and algorithmic decisions may be what needs reconciling. Back in December, I interviewed Kirsten Martin, who teaches the ethics of business analytics at the University of Notre Dame, for a story I reported on people analytics in HR. She told me that when employers use AI-backed tools for employment decisions, the normal rules apply: they are still obligated to show that their decisions are legal.

She recommended this litmus test: If that algorithm hadn’t recommended that decision, would you still make it?


Emily McCrary-Ruiz-Esparza is a freelance reporter based in Richmond, VA, who covers the future of work and women’s experience in the workplace. Her work has appeared in the Washington Post, Fast Company, Quartz at Work, and Digiday’s Worklife.news, among others.

ABOUT UNCUBED STUDIOS

Launched in 2016, Uncubed Studios is a full-service creative agency with a client list representing some of the most influential employers on earth, along with high-growth tech companies.

The team that brings the work of Uncubed Studios to life is made up of award-winning experts in cinematography, journalism, production, recruitment, employee engagement, employer branding and more. 

Interested in speaking with Uncubed Studios? Email us at studios@uncubed.com

 