4 questions to ask before adopting AI in your hiring process
When bringing AI into your hiring process, your vetting should prioritize the explainability of an AI tool over the promises the tech can deliver.
Artificial intelligence (AI) presents countless exciting new opportunities to streamline processes, increase efficiency, and improve fairness and equity across the hiring landscape.
As more AI vendors emerge in the marketplace, it’s imperative to ask the right questions before bringing AI into your hiring process. Failing to do so can set your organization up for major challenges, from lost time and resources to costly legal liability.
Here are the top 4 questions you should be asking any prospective AI vendor.
How explainable is the AI tool?
Explainable artificial intelligence (XAI) is a set of AI processes and methods that human reviewers can understand and verify in order to validate the results. Explainable AI is, by definition, white box AI, meaning the decision-making process is transparent and can be traced from inputs to outcome. It’s incredibly difficult to validate the accuracy of results if you can’t understand how the algorithm arrived at a given outcome.
XAI models matter because, unlike black box models – characterized by a lack of transparency and outcomes that cannot be sufficiently explained – explainable AI provides transparency into AI-powered decision-making. With XAI, humans can understand and validate the rationale behind specific AI recommendations and decisions, making those recommendations inherently more reliable – and more defensible.
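To make “white box” concrete, here is a minimal sketch of a transparent scoring model. The feature names, weights, and inputs are hypothetical, invented purely for illustration – the point is that a reviewer can see exactly how much each factor contributed to the final score:

```python
# A minimal sketch of a "white box" scoring model: every factor's
# contribution to the final score is visible and auditable.
# Feature names and weights are hypothetical, for illustration only.

WEIGHTS = {
    "years_experience": 0.4,
    "skills_test_score": 0.5,
    "writing_sample_score": 0.1,
}

def score_candidate(features: dict) -> float:
    """Score a candidate and print a human-readable breakdown."""
    contributions = {name: WEIGHTS[name] * features[name] for name in WEIGHTS}
    total = sum(contributions.values())
    # This breakdown is the "explanation": a reviewer can verify
    # exactly why the model produced this score.
    for name, value in contributions.items():
        print(f"  {name}: {value:+.2f}")
    print(f"  total score: {total:.2f}")
    return total

# Inputs normalized to a 0-1 scale for the sketch.
score_candidate({
    "years_experience": 0.6,
    "skills_test_score": 0.9,
    "writing_sample_score": 0.7,
})
```

A black box model, by contrast, would return only the final number, leaving a reviewer no way to verify how it was reached.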
At a minimum, organizations adopting an AI-powered hiring tool should ensure their prospective AI vendors provide a solution that can adequately explain how its determinations are made. If you can’t understand and validate the decisions the algorithm is making, you’re setting yourself up for potential headaches down the road, as well as possible legal liability – especially in the context of hiring, where decisions can have a major impact on individuals’ livelihoods.
Will this AI tool comply with new and pending regulations?
With new laws going into effect in New York and Colorado, and more than 15 other states considering AI legislation in some form, it’s important to understand how prepared your prospective AI vendors are to comply with emerging legislation.
Approaches vary – some states are amending existing privacy laws, while others, like New York, seek to limit the use of AI in hiring unless its involvement is clearly communicated. Either way, organizations should expect regulations around AI to continue to increase and evolve, with greater emphasis on transparency and a demonstrable absence of bias or adverse impact.
One key component of the new NYC Bias Audit law, which went into effect on July 5, is the requirement that a third-party bias audit be conducted on the technology annually, with the results made public. Because this requirement is expected to appear in other new and pending regulations, it’s important that your AI vendor is equipped to fulfill it and, ultimately, to pass this type of audit.
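To make the audit requirement concrete, bias audits of this kind center on impact ratios: each group’s selection rate compared with the rate of the most-selected group. Here is a minimal sketch of that calculation, using made-up counts:

```python
# A minimal sketch of the impact-ratio calculation at the heart of
# a bias audit: each group's selection rate is compared against the
# most-selected group's rate. All counts below are made up.

applicants = {"group_a": 200, "group_b": 180, "group_c": 120}
selected   = {"group_a": 50,  "group_b": 36,  "group_c": 18}

rates = {group: selected[group] / applicants[group] for group in applicants}
highest_rate = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest_rate
    # Ratios well below 1.0 (commonly, below 0.8 under the
    # "four-fifths rule") flag potential adverse impact.
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f}")
```

A vendor prepared for this kind of audit should be able to produce these figures – and explain them – for every group the law covers.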
How has the AI tool been validated for accuracy?
Regardless of whether AI is involved, a vendor providing an assessment for employee selection should be able to provide evidence that its assessment scores reliably and accurately predict important job outcomes (e.g., job performance), are relevant to the specific job(s) in question, and are fair and unbiased.
More specifically, you should be asking questions like, “What evidence can you show to demonstrate the reliability and validity of the outcomes?”, “What research and processes were used to develop this tool?” and, “What experience qualifies you to develop this AI tool?”
Ensuring your AI vendors have built their algorithms on a foundation of proven research, and have taken the proper steps to validate their outcomes – not just internally but externally as well – is critically important. This is especially true when using AI in hiring or any other area where AI-driven decisions can have a major impact on an individual.
The ability to validate that the AI model can accurately predict outcomes while also demonstrating the absence of bias should be a prerequisite for any prospective AI vendor.
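As a simple illustration of what validation evidence can look like, predictive validity is commonly summarized as the correlation between assessment scores and a later measure of job performance. The numbers below are invented for the sketch:

```python
# A minimal sketch of a predictive-validity check: correlate
# assessment scores with a later job-performance measure.
# All numbers below are invented illustration data.
import statistics  # statistics.correlation requires Python 3.10+

assessment_scores = [62, 71, 55, 88, 74, 90, 67, 80]
performance_ratings = [3.1, 3.6, 2.8, 4.4, 3.5, 4.6, 3.2, 4.0]

r = statistics.correlation(assessment_scores, performance_ratings)
print(f"Validity coefficient (Pearson r): {r:.2f}")
# A meaningfully positive r suggests the scores predict the outcome;
# a rigorous validation study would also examine statistical
# significance, job-relevance, and fairness across groups.
```

A correlation alone is only the starting point – which is why external validation, not just a vendor’s internal numbers, matters.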
How will the new AI tool impact the candidate experience?
Much of the emerging AI legislation is rightly focused on ensuring fairness for candidates – but how will the use of AI impact the humans involved?
The promise of advanced automation, like AI, is to make processes more efficient and generally easier all around. At the end of the day, AI tools should make experiences better – not worse. The most important question you should ask is, “Will this tool improve or worsen the candidate experience?”
In the case where an AI tool is being used to assess candidate skills, for example, the question should be, “Does the tool impose a burdensome step in the hiring process – like adding an hours-long skill test – or does it simplify the process by removing a manual assessment step altogether?”
When vetting AI for your hiring process, the bottom line is this: prioritize the explainability of an AI tool over the promises the tech can deliver.
It needs to be legally defensible and able to pass a rigorous bias audit. There needs to be documented evidence of the tool’s efficacy and accuracy. And it should ease some of the tedium and stress of the hiring process for candidates and hiring managers alike.
If your prospective AI vendor can check off those boxes, your organization is likely to bring on an impactful solution for your talent acquisition efforts.
Will Rose is the chief technology officer at Talent Select.