AI is as powerful as it is misunderstood. In the HR world, automated decision-making has tremendous potential impact, but that impact comes with an inherent stumbling block. Below, we present AI’s strengths, its pitfalls, and how to control both.
Strengths of AI
Much of AI, especially in the context of HR, has to do with pattern matching. In particular, many HR professionals are starting to look to AI to help them determine the best fit for a job position.
If a “best fit” can be defined in a measurable way, AI is a natural tool to bring to bear.
AI is great at using statistics to discover patterns. It can also be trained to find relevant patterns based on certain goals.
Real-World Examples of AI:
- Given a picture of a hand-drawn letter, AI can likely determine which letter it is. This is pure pattern recognition.
- Given a sample set of data, AI can often make future predictions.
- Given a resume of someone successful, AI can find the closest match from a set of applicants.
In short, if you can define what a successful best fit or match is and can provide a rich set of examples, AI engines tend to be able to predict “good fits.”
Take-Away Point #1: AI works best when it has a clearly defined measurement of a match.
Take-Away Point #2: AI needs the right data.
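To make the "closest match" idea concrete, here is a minimal sketch in Python. It scores applicants against a success profile by keyword overlap; the names, keywords, and scoring rule are all illustrative assumptions, not a real screening algorithm.

```python
# Hypothetical sketch: scoring applicants against a "success profile"
# by keyword overlap. All names and keywords are illustrative only.

def keyword_score(profile_keywords, resume_keywords):
    """Fraction of the success-profile keywords found in a resume."""
    profile = set(profile_keywords)
    return len(profile & set(resume_keywords)) / len(profile)

def best_fit(profile_keywords, applicants):
    """Return the applicant whose resume keywords best match the profile."""
    return max(applicants,
               key=lambda a: keyword_score(profile_keywords, a["keywords"]))

profile = ["python", "sql", "mentoring", "agile"]
applicants = [
    {"name": "A", "keywords": ["python", "sql", "excel"]},
    {"name": "B", "keywords": ["python", "sql", "mentoring", "agile"]},
    {"name": "C", "keywords": ["java", "agile"]},
]

print(best_fit(profile, applicants)["name"])  # applicant B matches all four
```

Note that the quality of the result depends entirely on the two take-away points above: how "best fit" is measured (here, keyword overlap) and what data is supplied (here, the keyword lists).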
HR Pitfalls with AI
One of the biggest issues with AI in the HR space is bias: explicit bias, implicit bias, historical bias, and more. When we think about bias in AI, we likely picture programmers or data scientists who bring personal prejudice into their work, whether intentionally or subconsciously. In truth, the problem is usually not that ill-intentioned, and it is far more complex.
Let’s remember that one of the primary goals of AI is to determine patterns and predict outcomes. Unchecked, AI will naturally perpetuate patterns. In HR, that means AI may enforce historic hiring biases.
In the fall of 2008, AI could have been used to create prediction models for the election. In doing so, we might have compared each candidate to all previously elected presidents. As part of that exercise, the model would need to be fed data on each candidate and all previous presidents. If that data consisted of only one attribute, race, then the AI engine would have noted the pattern that all previously elected presidents were white, without exception. Since AI makes predictions based on past data, and all previous presidents were white, it would not have predicted the Obama win. If, instead of race, the AI engine were fed only polling data by state, it would have made a very different prediction.
Take-Away Point #3: The nature of what data you use in the analysis of a situation can change the conclusion of that analysis. In particular, the bias of AI can be changed based on the data that is provided.
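The election example can be sketched in a few lines of Python. Here a deliberately naive model predicts a win only if the candidate's feature value matches one seen among past winners; feeding it a different feature flips its conclusion. All data below is illustrative, not real election data.

```python
# Hypothetical sketch: the same naive "match past winners" model reaches
# opposite conclusions depending on which feature it is fed.
# All records below are illustrative, not real election data.

def predict_win(candidate, past_winners, feature):
    """Predict a win only if the candidate's value for this feature
    appeared among past winners -- pure pattern matching."""
    seen = {winner[feature] for winner in past_winners}
    return candidate[feature] in seen

past_winners = [
    {"race": "white", "led_in_polls": True},
    {"race": "white", "led_in_polls": True},
    {"race": "white", "led_in_polls": False},
]
candidate = {"race": "black", "led_in_polls": True}

print(predict_win(candidate, past_winners, "race"))          # False
print(predict_win(candidate, past_winners, "led_in_polls"))  # True
```

The model itself never changed; only the choice of feature did. That choice is where the bias lives.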
HR Control over AI
Recruiters often sort through large numbers of resumes from potential candidates to select the “best fit” for the job. As you’ve gathered from above, when you hear “best fit,” it’s an opportunity to ask yourself if AI can be helpful. The answer many in HR are coming to is “Yes.”
Many companies have come up with different models for assessing talent. The most prevalent is based on the idea that “Nothing succeeds like success,” the hypothesis being that a history of success is the most likely predictor of future success. But, as we saw earlier, we need to be careful about what features we look at when using history as a predictive mechanism.
Keeping company history in mind is critical. If your company has a history of racism or sexism, and you provide certain supporting information to an AI engine, then it will likely perpetuate those behaviors. If your most common success stories predominantly involve a particular race or gender, then without intentional action, your AI engine may well continue recommending that race or gender as the “best fit”.
When it comes to resume screening, we might mindfully disregard certain attributes. By intentionally removing access to data on race, gender, and ethnicity, we ensure AI engines can’t make decisions based on those features. One interesting tool for removing bias-friendly data is Blendoor, an ATS add-on that strips out names in an attempt to lessen cultural and gender bias.
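The "remove access to the data" idea can be sketched as a simple redaction step that runs before any record reaches a screening model. The field names here are illustrative assumptions, not Blendoor's actual implementation.

```python
# Hypothetical sketch: stripping bias-prone fields from applicant records
# before they ever reach a screening model. Field names are illustrative,
# not drawn from any real ATS or from Blendoor.

SENSITIVE_FIELDS = {"name", "gender", "race", "ethnicity"}

def redact(applicant):
    """Return a copy of the applicant record without sensitive fields."""
    return {k: v for k, v in applicant.items() if k not in SENSITIVE_FIELDS}

applicant = {
    "name": "Jordan Smith",
    "gender": "F",
    "years_experience": 7,
    "skills": ["python", "sql"],
}

clean = redact(applicant)
print(sorted(clean))  # ['skills', 'years_experience']
```

A model that never sees a field cannot pattern-match on it directly, though proxies for those attributes can still leak through other fields, which is why human oversight remains essential.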
There are also tools to provide a more diverse applicant pool. Consider this line in a job posting:
“I want an aggressive go-getter with tons of drive and determination.” Sounds reasonable, right? After all, you’re not specifying a man or a woman. You just want those qualities. When you post the job and 90% of applicants are male, that’s not on you, right? This is where TextIO comes in. They’ve made a business of informing people about implicit gender bias based on verbiage. Not only will their tools point out gender-targeted wording, they’ll provide alternatives to help communicate in more receptive ways.
Key Take-Aways
- AI is great for finding patterns, including those you want to change. It can either reinforce and perpetuate patterns of historic racism, sexism, and other biases, or it can be an engine of true change.
- HR is capable of controlling the role of AI in its organization.
- The thoughtful selection of the data and features provided to an AI algorithm is vital.
- Modern AI HR tools can be very powerful when partnered with human judgment.
HR departments are significant buyers of AI technology. As such, HR leaders are in a position to require insight into automated decision-making from their vendors, and can help shape these powerful engines as AI becomes a bigger part of the HR toolbox.