The Ethics of AI in HR: What Does It Take to Build an AI Ethics Framework?

Last Updated: December 16, 2021

Tackling bias features among the top principles Google has defined to guide AI development. It is also at the heart of the controversy that led Google to dissolve its recently formed AI ethics council. Bias is evidently one of the biggest concerns in the implementation of AI technology, and even the search giant is struggling to define an ethics framework to guide its development. So, where does HR fit into this picture, and can an ethics framework for AI in HR be built effectively?

Artificial intelligence (AI) continues to be implemented across industries with positive impact. However, there is still no unified ethics framework that guides AI development and its application in all industry verticals.

To address this pressing issue, on March 26, 2019, Google announced that it had created the Advanced Technology External Advisory Council (ATEAC) to develop an ethics framework, primarily to address AI-powered facial recognition and fairness in machine learning (ML). However, on April 4, 2019, within 10 days of its establishment, the ATEAC was dissolved. One of the reasons was the controversy around the council’s members: Kay Cole James, president of the Heritage Foundation, was accused of harboring anti-LGBTQ and anti-immigrant beliefs. As a result, the company decided to dissolve the council and instead work independently with external researchers to guide the ethics of its AI development.

In 2018, after being called out for working with the Pentagon on AI-powered weaponry, Google created a set of seven principles to guide its work on AI applications. “Avoid creating or reinforcing unfair bias” features as number two on this list. Clearly, the ability of AI to reinforce bias is acknowledged even by the frontrunners of AI technology.

What does this mean for the world of human resources, where AI is increasingly being applied to processes such as candidate screening, recruitment, employee engagement, and compliance? Before we investigate this, let’s look at how AI reinforces bias.

Also Read: Why Artificial Intelligence Needs to Be Balanced With Human Intelligence In HR

How Does AI Reinforce Bias?

As futuristic and promising as it may seem, AI is not without flaws. Deep learning models rely on something much like what, in the context of human behavior, is called “conditioning.” Just as humans can be conditioned to tell good from bad, and sometimes to favor the bad over the good, deep learning algorithms can pick up human bias from the examples they are trained on. If the data carries bias, even unconscious, inherent bias, that is what the machine will learn.

Bias begins in the data fed to ML engines. These engines are built on neural networks that learn to make associations in that data. Take, for example, an AI-powered recruitment tool. If its algorithm learns from historical data in which 7 of 10 candidates for a tech job held a relevant college degree and it was mostly the degree-holders who were hired, the tool is likely to favor candidates with a degree, even when those without one are more experienced in the field. It can then keep reinforcing this bias, continually downgrading candidates without a degree and shrinking a rich talent pool on the basis of a single factor.
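
To make this concrete, here is a minimal, hypothetical sketch in Python (not any vendor’s actual system) of how a screening model trained on biased historical decisions can learn to prefer degree-holders. The dataset, the feature names, and the choice of scikit-learn’s LogisticRegression are illustrative assumptions only.

```python
# Hypothetical illustration (not any vendor's real model): a screening
# classifier trained on biased historical hiring decisions learns to
# prefer degree-holders over more experienced candidates.
from sklearn.linear_model import LogisticRegression

# Each row: [has_degree, years_experience]; label 1 = hired in the past.
# In this made-up history, only the degree-holders were hired.
X_history = [
    [1, 2], [1, 3], [1, 1], [1, 4], [1, 2], [1, 5], [1, 3],  # 7 candidates with a degree
    [0, 8], [0, 9], [0, 7],                                   # 3 without a degree, more experienced
]
y_history = [1, 1, 1, 1, 1, 1, 1, 0, 0, 0]

model = LogisticRegression().fit(X_history, y_history)

# The model scores a highly experienced candidate without a degree lower
# than a junior candidate with one, reproducing the historical bias.
print(model.predict_proba([[0, 10]])[0][1])  # experienced, no degree -> low score
print(model.predict_proba([[1, 1]])[0][1])   # junior, with a degree  -> high score
```

Because the historical labels reward the degree rather than the experience, the model’s scores simply reproduce that preference for every new candidate it sees.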

However, AI can also be deployed to reduce bias in hiring, if it is designed carefully and the factors that contribute to bias, such as skin color, gender, and education, are kept out of its inputs.
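
As a narrow illustration of that idea, the sketch below shows a hypothetical preprocessing step that drops bias-prone attributes from a candidate record before it reaches any scoring model. The field names are assumptions, and removing explicit attributes alone is not a complete fix, since proxies such as school names or zip codes can still leak the same information.

```python
# Hypothetical preprocessing step: strip attributes that can encode bias
# before a candidate record is passed to a scoring model. Field names are
# illustrative only; real systems must also account for proxy variables.
PROTECTED_OR_PROXY_FIELDS = {"skin_color", "gender", "education", "age", "name"}

def strip_bias_prone_fields(candidate: dict) -> dict:
    """Return a copy of the candidate record without bias-prone fields."""
    return {key: value for key, value in candidate.items()
            if key not in PROTECTED_OR_PROXY_FIELDS}

candidate = {
    "name": "A. Example",
    "gender": "F",
    "education": "no degree",
    "years_experience": 9,
    "skills": ["python", "sql"],
}
print(strip_bias_prone_fields(candidate))
# {'years_experience': 9, 'skills': ['python', 'sql']}
```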

Also Read: How IBM Watson OpenScale Could Drive AI Adoption in Human Resources

How Can We Build a Unified Ethics Framework for AI in HR?

To truly understand the need for and the challenges in building a strong AI ethics framework in human resources, we spoke to Loren Larsen, CTO of HireVue, a platform that uses AI predictions to simplify the hiring process. HireVue was the first in the HR technology space to create an expert advisory board to guide ethical AI development.

Loren explains, “The biggest challenge in developing an AI framework is to ensure that it’s relevant and applicable enough for the here and now while also being flexible enough to cover the rapid evolution of AI technology.” Because AI evolves so rapidly, it must be built on a framework that offers clarity and a way to translate philosophical questions about right and wrong into criteria a system can actually measure.

But Loren also offers a practical way to bring covert human bias to the forefront and address it: “HireVue believes that any ethics framework for AI needs to incorporate a vast set of perspectives – from academic to legal to social to technological.”

And to further simplify the challenge, he adds, “To be robust enough to serve as a North Star for technology developers, I think the best place for an ethics framework to be developed is within specific industries. This will have the most decisive impact on technology as it is applied to a particular area of practice, such as hiring.”

Also Read: How Artificial Intelligence is Humanizing HR

A Final Word About AI Ethics Implications for HR Professionals

In AI-powered recruitment, collecting data is a long-drawn-out but fruitful process if done right. Organizations can no longer afford to show bias in hiring, even unconscious bias. Growing global awareness of bias, along with the compliance and legal risks of biased hiring, underscores the need for a transparent hiring process.

We conclude with one more valuable input from Loren: “Because of the substantial potential impacts of AI technology on individuals and on society, a unified framework of guiding principles is critical – and must reflect a high degree of responsibility from companies developing AI solutions. If any truly unified ethics framework can be created, I think it will necessarily be extremely high-level and not finely grained.”

An ethics framework must be the foundation on which any AI technology is created and implemented. Even then, it may be a while before bias is fully addressed in AI-powered solutions for HR. What such a framework can do is help companies build AI technologies that minimize, if not eliminate, bias in their algorithms. Combined with human oversight, such AI applications can then lead to unbiased recruitment and quality hiring.

Has AI been implemented in HR functions in your organization? What have the results of using AI in HR been? Tell us on our Facebook, LinkedIn, and Twitter pages. We’re always listening!
