

AI Is Changing The Recruiting Game; It May Also Be Violating The Rules

Forbes Human Resources Council

Robert Sheen is founder and CEO of Trusaic, a purpose-driven technology company focused on pay equity, DEI and healthcare.

Recent guidance from the U.S. Equal Employment Opportunity Commission (EEOC) and a landmark lawsuit show employers where they may run afoul of longstanding anti-discrimination regulations due to their AI recruiting tech.

There’s no question that AI-powered technologies are a game changer for talent acquisition teams, saving them time, effort and money at virtually every stage of the recruiting process. The question is whether these rewards outweigh the risks of using these cutting-edge technologies.

With roughly 80% of employers now using some form of AI in recruiting and hiring, the answer to this question appears to be "yes"—but it comes with some heavy caveats.

Caveat No. 1: Never assume AI tech is bias-free.

As we’ve learned from large language models and survey generators, AI tech is only as good as the data it’s trained on, and those data can easily reflect the perspectives, preconceptions and even the cultures of the technology’s creators and developers, whether accidentally or intentionally.

Indeed, the AI solution your organization is using right now to facilitate recruiting may violate Title VII of the Civil Rights Act of 1964, which prohibits workplace discrimination based on race, color, religion, sex (including pregnancy, sexual orientation and gender identity) or national origin.

This is exactly the situation a tutoring company found itself in recently, when its AI-powered hiring software apparently rejected female applicants over 55 years old and male applicants over 60 automatically. The EEOC filed a lawsuit on behalf of the nearly 200 rejected applicants (in this case under the Age Discrimination in Employment Act, since the alleged bias was based on age rather than a Title VII-protected characteristic), and the tutoring company agreed to pay a settlement of $365,000.

This was the first-ever settlement involving bias in artificial intelligence software, and it almost certainly won’t be the last.

Caveat No. 2: Caveat emptor (buyer beware).

Although the aforementioned tutoring company used its own internally programmed hiring software, employers who purchase AI tech solutions from external vendors aren’t off the hook. If the solutions they purchase and utilize create discriminatory impacts within their hiring process, the employers can be held liable even if vendors assured them these tools were in full compliance with Title VII.

This is so critical a distinction that it was highlighted in a technical assistance document issued by the EEOC earlier this year, which outlined how Title VII is applied to all systems that incorporate AI into HR-related activities such as recruitment and hiring. Additionally, the document provides guidance regarding disparate impact (a.k.a. adverse impact) cases and answers questions employers might have about using automated systems in their employment decisions.

The EEOC’s guidance, which is focused on preventing discrimination against job seekers and workers, places the burden of responsibility directly on employers.

Caveat No. 3: This burden of responsibility isn’t limited to the hiring process.

AI tech is now being leveraged across nearly all aspects of talent management—from basic performance reviews to compensation to career development and more. This means there are a staggering number of places where, without appropriate safeguards in place, employers using AI solutions might unwittingly violate Title VII. Again, employers are on the hook for ensuring compliance with anti-discrimination laws across all their AI-based decision-making tools.

AI tools used for pay evaluation and determination are a perfect example. As growing numbers of employers work to improve pay equity and transparency, many rely on third-party AI software and solutions for guidance. These tools face the same disparate impact (adverse impact) scrutiny as recruiting tools.

How can you prevent AI-embedded bias?

There are a number of steps you can take to help protect your organization from violating Title VII.

The first is to ask technology vendors some very specific questions, ideally before making a purchase. These questions should include:

• What steps have been taken to ensure their AI products don’t cause an adverse or disparate impact on any group of job candidates or employees?

• Have they applied the four-fifths rule in developing their products (i.e., confirming that the selection rate for any protected group is at least four-fifths of the selection rate for the group with the highest rate)?

• Have they conducted evaluations and/or audits of their products to eliminate potential bias? If so, can they demonstrate this formally?
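The four-fifths calculation in the checklist above is simple enough to sanity-check yourself. Below is a minimal sketch in Python; the group labels, outcome data and function names are hypothetical, for illustration only, and a real audit should be performed on actual applicant data with legal guidance.

```python
from collections import Counter

def selection_rates(outcomes):
    """Compute per-group selection rates from (group, selected) records."""
    applied = Counter()
    selected = Counter()
    for group, was_selected in outcomes:
        applied[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / applied[g] for g in applied}

def four_fifths_check(outcomes, threshold=0.8):
    """Return each group's selection rate and whether it meets
    four-fifths of the highest group's rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: (rate, rate / top >= threshold) for g, rate in rates.items()}

# Hypothetical screening outcomes: (group label, passed the screen?)
outcomes = (
    [("group_a", True)] * 60 + [("group_a", False)] * 40 +  # 60% selected
    [("group_b", True)] * 40 + [("group_b", False)] * 60    # 40% selected
)
print(four_fifths_check(outcomes))
# group_b's ratio is 0.40 / 0.60 ≈ 0.67, below the 0.8 threshold,
# so it is flagged as potential adverse impact.
```

A failed check like this is evidence of possible adverse impact under the EEOC's guidelines, not proof of a violation, but it is exactly the kind of result you would want a vendor to explain before purchase.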

As previously noted, it's not enough to simply take vendors' word that their AI products have passed anti-discrimination muster. The burden of responsibility for complying with Title VII is on you. You should therefore test these products yourself before implementing them, using data sets that are as large and diverse as possible. This gives you a greater chance of uncovering hidden biases and, thus, protecting your company's reputation.

Remaining compliant with anti-discrimination laws isn’t a one-and-done proposition, so it’s wise to continue conducting tests and reviews of your AI tools on a regular basis.

You should also work with your legal counsel to conduct annual AI bias audits. This may soon become mandatory for many employers, as it already is for those in New York City. The city's Automated Employment Decision Tool law, which took effect in July 2023, requires employers that use AI and other machine learning technology in their hiring process to conduct annual audits of their recruitment technology. A third party must perform these audits, checking specifically for built-in bias, intentional or not. Maryland and Illinois also have laws governing the use of AI in hiring. Given the proliferation of AI tools and tech, similar laws in other states are sure to follow; New Jersey, New York State, California and Vermont are all currently considering their own AI bias regulations.

Unfortunately, the risks of using AI tech in the hiring process aren't well understood by the people who typically source and use these tools: recruiters, hiring managers, procurement professionals and HR staff. Providing these individuals with expert training in these tools and technologies is a huge step toward safeguarding your organization from built-in biases.

Conclusion

Ensuring responsibility in AI technologies is no longer optional. With so many employers having already adopted these technologies, protecting their organizations—as well as their job candidates and employees—from AI bias is now truly imperative.



