
Has Your Talent AI Been Audited?

Forbes Human Resources Council

Chief Executive Officer & cofounder at Reejig | World Economic Forum Technology Pioneer 2022 | Award-Winning Workforce Strategist.

If you use AI for any form of talent decision-making in your organization and it results in discrimination, whether the bias originates with you or is introduced by the AI, you are the one who is liable. When it comes to verifying that AI is ethical, this could be just the start of a global ripple effect.

AI has the power to do a lot of good, but working with big data comes with risks. I've spent over 20 years as a workforce strategist scaling teams for some of the largest projects in the world, and I've witnessed firsthand the impact of lacking visibility into the skills and capabilities of my people. I saw how much potential was being wasted in our people and our business, which motivated me to develop an independently audited ethical talent AI. Based on that experience, I'd like to share why HR leaders should verify that their talent AI has been audited.

Has your talent AI been independently audited? And if not, why not? The risk of making a mistake here could be too great for your people and your organization—one that comes with serious consequences.

Understand The Dangers Of Discrimination And The Responsibility Of Employers

Discrimination laws apply equally whether a person or a robot is responsible for an employment decision. As an employer, the risk of discriminating against an individual using AI is very real if you don’t know why a decision is being made. If your AI makes a poor recommendation that you can’t justify when challenged, your business could face the consequences.

This issue matters because individuals looking to be hired or secure a promotion don’t know how decisions are being made about them. They need to feel confident that boundaries are in place to prevent AI from doing harm, and we need to help them understand how it works.

For instance, New York City's Local Law 144, which is set to come into effect on January 1, 2023, requires that employers use only automated employment decision tools (AEDTs) that have undergone a "bias audit" within the past 12 months. This law is part of a rapidly developing shift that protects the rights of workers around the world and puts the onus on AI vendors delivering employment decision-making support to become audited and truly ethical.

Any employer with staff in New York City, or who will be opening their candidate pool to New York City residents, must review all existing and new technology vendors who fall under this definition to ensure that they comply with this requirement. They must also publish a summary of the relevant audit findings on their website.
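At the core of an AEDT bias audit is a simple statistical check: compute the selection rate for each demographic category and divide it by the rate of the most-selected category to get an impact ratio. The sketch below illustrates that calculation in Python; the group names and numbers are hypothetical, and a real audit would cover the specific categories and data the law and the auditor define.

```python
# Illustrative sketch of the impact-ratio calculation at the heart of an
# AEDT bias audit. All group names and figures below are hypothetical.

def selection_rates(outcomes):
    """outcomes maps category -> (selected, total_applicants)."""
    return {cat: sel / total for cat, (sel, total) in outcomes.items()}

def impact_ratios(outcomes):
    """Each category's selection rate divided by the highest rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {cat: rate / top for cat, rate in rates.items()}

# Hypothetical screening results by category.
outcomes = {
    "group_a": (48, 100),  # 48% selected
    "group_b": (30, 100),  # 30% selected
}

ratios = impact_ratios(outcomes)
# group_a has a ratio of 1.0 (the most-selected group); group_b's ratio
# of 0.625 falls well below the common four-fifths (0.8) guideline and
# is the kind of disparity an audit summary would need to surface.
```

The four-fifths threshold referenced in the comment is a long-standing guideline from U.S. employment law for flagging adverse impact, not a pass/fail line defined by LL 144 itself; your auditor will define what must be reported.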

Job candidates must also be notified of the tools used to assess their application before being screened, including details of the job qualifications and characteristics that may be considered. They must then be given the opportunity to opt out and request an alternative process, with information about the type of data collected and data retention policies made available on request.

Employers should take note: It’s time to make sure your house is in order and ensure you’re using AI that is ethical, regardless of where your people are located.

Know There’s Still Work To Be Done

The non-profit New York Civil Liberties Union argues that LL 144 is a good start but doesn't go far enough in protecting workers from bias in automated employment decision tools. It states that such technology "all too often replicates and amplifies bias, discrimination, and harm towards populations who have been and continue to be disproportionately impacted by bias and discrimination."

Currently, the framing is focused largely on race, ethnicity and gender. These are the most common biases, but what about LGBTQIA communities, people living with disabilities, those from lower socioeconomic backgrounds and other disadvantaged groups?

We have long understood that ensuring AI doesn't introduce or reinforce bias is critical to its broader acceptance in society. New York's LL 144 is proof that the era of regulation has arrived, and companies should expect more rules to follow.

The California Privacy Rights Act is set to update the California Consumer Privacy Act when it comes into effect on January 1, 2023. Alabama, Colorado, Illinois, Mississippi and Virginia have also enacted AI laws in the past couple of years. The White House has announced an AI Bill of Rights and the European Union continues to set the pace of consumer protections with its planned AI Act.

While there are inroads being made, there is work employers can do in the meantime.

Questions To Ask Your Automated Employment Decision Tool Vendors

Based on my experience, here are some questions employers can ask their vendors about how their talent AI is audited:

• Have you undergone an independent bias audit?

• If so, who conducted your independent bias audit? Did its scope cover processes, data and algorithms? How long is it valid?

• When was the audit completed, and do you plan to repeat it annually?

• What was the independent audit process, and will you share the audit outcomes?

• How do you support notice and consent requirements?

• What data sources do you collect your talent information from?

• What personal characteristics and job information do you collect? How do you use it to inform your selection process?

• How are your AI and machine learning built to reduce bias and provide ethical decision-making support?

• How are you regulating your AI and reviewing its selection process in relation to ethical standards and bias reduction?

Understanding where your HR AI stands in compliance with upcoming regulations now could save you pain in the new year—and give your company a chance to evaluate your tech to ensure it's compliant, ethical and free from bias.

Let’s never forget that a person is on the other end of all these decisions.

