Navigating the hype of AI tools in HR
Sometimes it’s better to be first to a new technology, but in the case of AI in HR, it may just be better to be smart than first.
AI is the hot topic of 2023 in every industry. The possibilities of the technology seem limitless, and in every discipline people are asking how they can take advantage of it and stay ahead of the competition. HR is certainly no exception, and there is already a range of services available for recruitment, career development, and employee management.
The pressure is on, but the road is complex
In the present hype around AI, it’s tempting to feel that organizations that don’t adopt quickly will be left behind. While there’s little doubt that AI will become an important part of core HR functions in the coming years, how exactly that will look isn’t clear yet.
To help navigate the hype and the complex legal, ethical, and business landscape, we'll look at two decision-making models for adopting AI tools and discuss some of the philosophical and ethical questions around AI that are unique to HR.
Beating the hype
The analyst firm Gartner developed a now-famous model for understanding how hyped technologies move from initial development to business usefulness. The Gartner Hype Cycle begins with a surge of interest and early adoption, followed by a period of disillusionment when the technology doesn't live up to the promises made in the hype. Finally, the technology gradually matures into a form that businesses can reliably adopt and use. Gartner's 2022 Hype Cycle Report placed all AI technologies somewhere before that final stage of stability, with generative AI (the kind that generates new content, like ChatGPT) right near the peak.
The Hype Cycle is all about risk: the earlier in the cycle an organization adopts a technology, the less certain it can be that the technology will deliver value. With this in mind, there are two guiding principles for HR departments feeling the pressure to integrate AI into their processes:
- AI is a wide set of technologies, not a solution. This may sound obvious, but at the height of a trend it's easy to fall into the mindset that unleashing AI on a tricky area will inherently deliver value. That mindset is particularly easy to slip into as we're bombarded with stories about how game-changing AI (particularly generative AI) will be in areas like software development.
- AI has the potential to make some HR processes worse. While it's not good for any business function to make mistakes, HR deals with some of the least quantifiable and most complex components of an organization: people. It's AI's interaction with people that poses some of the biggest challenges for the technology as a whole, including existential issues like preventing AI from harming people. For this reason, it's essential that HR departments proceed into the world of AI with caution.
Applications vary widely
While recruitment, training, performance evaluation, career development, and employee management are all within the purview of HR, the application of AI to each of them presents wholly different pitfalls and possibilities.
One of the most promising applications of AI is automation, but there's a particularly wide gulf between automating tedious, preliminary activities and automating decisions. By overlaying this distinction on the spectrum of HR functions, we can get an idea of the many possibilities for AI in HR today, and of how we feel about them (is automating initial recruitment screening with AI tools OK? How about automating job offer decisions?).
In the areas of recruitment and performance evaluation, the human nuances involved invite particular caution when considering current AI tools.
AI has alternately been credited with reducing bias in hiring and, at times, blamed for making it worse. Last year, the U.S. Justice Department warned organizations that using AI tools for selection can lead to violations of the Americans with Disabilities Act (ADA). It appears that, in 2022 at least, AI tools were having trouble matching human HR professionals in fully addressing the often complex social and legal environment around disabilities and reasonable accommodations. Separately, in 2020 a group of U.S. senators wrote to the chair of the Equal Employment Opportunity Commission expressing concerns that hiring technologies like gamified interview elements and personality tests could reinforce or introduce unwanted bias in hiring.
At The Myers-Briggs Company, we're particularly interested in examining and clarifying the proper and improper uses of personality tools in hiring because, although it's very popular, the Myers-Briggs itself can't be used for hiring, selection, or performance prediction. This doesn't diminish the value of the Myers-Briggs; it simply isn't designed for those purposes. Rather, it illustrates that we need a precise understanding of each tool we use in order to get the most value from it and to minimize risk and harm.
This is certainly true for any AI solution that includes personality elements, but it applies to all AI technologies. Any assessment of an AI project must begin with a precise understanding of the technology, its capabilities, and particularly its limits.
Assessing AI projects
The Gartner Hype Cycle examines a technology’s risk profile by looking at technology maturity and adoption. The Harvard Business Review has developed a similar risk-oriented model for choosing an AI project (in this case they focus on generative AI, but the concept is portable). With a precise understanding of the technologies under consideration, organizations can use this model to see just how risky and necessary an AI project is.
The HBR model looks at a project along the axes of risk and demand. Risk in this context can be thought of as: “How much damage could this do if something went wrong?” Demand is conceptualized as: “How much do we actually need the capabilities or benefits this technology offers?”
Each organization must answer these questions for itself and assess its own risk tolerance for each technology or project under consideration, but this is a great way to bring clarity to a complex and, at present, hype-charged space.
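To make the model concrete, here's a minimal sketch of how a review team might record its answers to those two questions. Only the risk and demand axes come from the HBR model; the 1–5 scoring scale, the threshold, and the example projects are illustrative assumptions for this sketch, not part of the model itself.

```python
# Illustrative sketch of a risk/demand assessment inspired by the HBR model.
# The 1-5 scores, the threshold, and the example projects below are
# hypothetical assumptions for demonstration purposes only.

def assess_project(name: str, risk: int, demand: int, threshold: int = 3) -> str:
    """Place a project in a risk/demand quadrant and suggest a posture."""
    high_risk = risk >= threshold      # "How much damage if something went wrong?"
    high_demand = demand >= threshold  # "How much do we need this capability?"
    if high_demand and not high_risk:
        verdict = "strong candidate: clear need, limited downside"
    elif high_demand and high_risk:
        verdict = "proceed with safeguards: needed, but damaging if it fails"
    elif not high_demand and not high_risk:
        verdict = "low priority: safe, but not clearly needed"
    else:
        verdict = "avoid for now: risky and not clearly needed"
    return f"{name}: risk={risk}, demand={demand} -> {verdict}"

# Hypothetical HR projects scored by a review team.
print(assess_project("Automated interview scheduling", risk=1, demand=4))
print(assess_project("AI-assisted resume screening", risk=4, demand=4))
print(assess_project("Automated job-offer decisions", risk=5, demand=2))
```

However an organization chooses to record its scores, the point is the same: the two questions force a project-by-project judgment rather than a blanket verdict on AI.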
Get the basics right first
The demand axis of the HBR model reminds us that while AI technology holds real possibilities, we must take an honest look at how much we need the help it offers and whether that help will actually advance our objectives.
Before considering an AI technology or project (particularly for hiring and selection), it’s essential that an organization has clear diversity and inclusion objectives and an understanding of how well it’s already meeting them. It must also have a clear picture of how well it’s already addressing unwanted discrimination and bias in the selection process. If bias, discrimination, and diversity objectives are already being addressed effectively, introducing AI could increase overall risk.
Finally, companies should ensure that only reliable, valid, and fair assessment results are used to inform any AI-generated selection decisions and that the process is equitable and transparent. A lot of industry-wide promises are being made on behalf of AI right now, but before it becomes transformative for HR as a whole, individual organizations will be the laboratories for the legal, financial, and ethical facets of AI.
Sometimes it’s better to be first to a new technology, but in the case of AI in HR, it may just be better to be smart than first.
John Hackston is a chartered psychologist and Head of Thought Leadership at The Myers-Briggs Company where he leads the company’s Oxford-based research team.