The increasing use of AI in HR brings new litigation risks
People analytics is a powerful and effective tool for HR teams. But it is no panacea for bias and prejudice.
Hiring teams leverage people analytics because it helps employers find the best talent and evaluate their recruiting methods. It can also identify personality types and characteristics that predict long-term success in specific positions. Some companies use AI-powered facial scanning tools for remote interviews. These tools aim to predict a candidate’s fit for a position by analyzing speech patterns, expressions, and eye movements.
Some employers use AI-powered gamification tools to select and evaluate employees. One company used gamification to select call-center employees. It gathered data about successful employees: ones who not only succeeded at the job but also stayed. Using that data, the company developed a “game” to test applicants and determine whether they had the targeted characteristics. Using this new tool alone, the employer improved retention by 20% over four years.
HR departments are also using analytics for performance management: conducting employee evaluations, developing pay-for-performance metrics, ensuring employees are on the right career track, developing appropriate KPIs, and improving employees’ quality of life. They are also using analytics in employee training: identifying KPIs through analysis of training effectiveness, increasing employee feedback, and improving employee engagement with gamification concepts.
People analytics is also being used to improve employee retention. Some services let employers track patterns of employee conduct and run sentiment analysis on email to look for deviations that signal a risk of departure. Employers are also using data analytics and AI to identify correlations between resignations and factors like pay increases, training opportunities, promotion wait times, benefits, changes to management structure, and manager effectiveness.
Some companies are also using predictive analytics to control health costs. These tools help determine program effectiveness, find gaps in coverage, control costs, and improve overall plan performance. For example, one data company works with self-insured employers: it mines employee medical claims and prescription data, gathers data from employees’ social media and search queries, combines it all, applies its algorithms, and predicts the employer’s health care costs. The same analysis can also predict where employers may see an increase in leave applications.
A major risk of all this, of course, is discrimination lawsuits. At their core, AI and analytics are supposed to be purely objective. But that’s not always the case. If an AI tool or predictive model is built using the profiles of past, successful employees, the tool will look for characteristics of those types of people. If these “successful” employees are not diverse, the tool will favor non-diverse candidates. By doing so, the tool may perpetuate prior discrimination or unfairness.
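To make that mechanism concrete, here is a minimal, purely hypothetical Python sketch (synthetic data and an off-the-shelf scikit-learn logistic regression, not any vendor’s actual product). It shows how a screening model trained on skewed “past success” labels can favor the historically overrepresented group even when the protected trait itself is never given to the model, because an innocuous-looking proxy feature carries the same signal.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

group = rng.integers(0, 2, n)           # 1 = historically overrepresented group (hypothetical)
skill = rng.normal(0, 1, n)             # true job-relevant signal
proxy = group + rng.normal(0, 0.3, n)   # innocuous-looking feature that tracks group

# Past "success" labels encode who was hired and retained before,
# so they reflect the historical skew, not just skill.
past_success = (skill + 1.5 * group + rng.normal(0, 1, n)) > 1

X = np.column_stack([skill, proxy])     # the protected trait is excluded from the inputs...
model = LogisticRegression().fit(X, past_success)

# ...yet the trained model still screens in the overrepresented group
# more often, because the proxy feature carries the group signal.
picks = model.predict(X)
for g in (0, 1):
    print(f"group {g} selection rate: {picks[group == g].mean():.2f}")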
Because these tools are used so broadly by HR departments, their use could lead to more disparate-impact litigation. A disparate-impact claim challenges a facially neutral policy that nonetheless has a discriminatory effect on a protected class. A bias-influenced analytics tool could fit this definition perfectly: it’s applied broadly, it’s facially neutral, and it certainly could have a negative impact on protected classes. As more companies implement AI tools and predictive analytics, we will likely see an explosion of disparate-impact litigation. There are also many disability-law implications for employers to consider. Gamification can disadvantage an applicant or employee with a disability, and facial recognition tools present obvious ADA issues. The potential legal exposure is substantial, and these risks will continue to develop, grow, and evolve over time.
What can be done right now to address these risks?
First, know that analytics and AI tools are not bias-free—these tools are not faceless paragons of objectivity. Because they are built by humans and are based on human characteristics, they are subject to all types of bias.
Second, understand that these algorithms are complicated black boxes; it’s hard to decipher what’s really running behind the scenes. Even so, you cannot assume that these tools are bias-free. You must understand what correlations are being used, how the tool was trained, and if and how the tool was tested.
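As one concrete example of testing the tool, here is a short Python sketch of an adverse-impact check based on the EEOC’s four-fifths guideline, under which a group selection rate below 80% of the highest group’s rate is commonly treated as a red flag for disparate impact. It assumes only that you can export the tool’s pass/fail decision for each applicant along with a group label; it is a screening heuristic, not a legal safe harbor.

# Adverse-impact check under the EEOC's "four-fifths" guideline.
# Assumes an exported audit log of (group_label, was_selected) pairs.

def selection_rates(records):
    """records: iterable of (group_label, was_selected) pairs."""
    counts = {}
    for group, selected in records:
        passed, total = counts.get(group, (0, 0))
        counts[group] = (passed + bool(selected), total + 1)
    return {g: passed / total for g, (passed, total) in counts.items()}

def four_fifths_flags(rates):
    """Flag groups whose selection rate is < 80% of the top group's rate."""
    top = max(rates.values())
    return {g: r / top for g, r in rates.items() if r / top < 0.8}

# Hypothetical audit log of the tool's screening decisions:
log = [("A", True)] * 40 + [("A", False)] * 60 \
    + [("B", True)] * 20 + [("B", False)] * 80
rates = selection_rates(log)
print(rates)                     # {'A': 0.4, 'B': 0.2}
print(four_fifths_flags(rates))  # {'B': 0.5}  -> below the 0.8 threshold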
Third, the computer should not be the final arbiter. AI and analytics can be very powerful and effective. But they do not replace independent judgment and common sense. They are a tool in the toolbox but far from the only one.
Fourth, be aware of employee privacy concerns. Just because you can gather (or buy) data does not mean that you should. Think of how your employees would feel if they knew how the company was analyzing their “personal” (but public) data. And think about how a jury would react to your efforts to collect and use employee data. Plus, GDPR-style privacy regulation is coming to the United States whether we like it or not, and it will change employees’ expectations regarding their individual privacy rights at work. The pandemic and the rise of telecommuting have also increased employees’ awareness of their privacy rights (or lack thereof) and led many employees to expect greater privacy at work.
People analytics is not going away. It is powerful, effective, and proven in many ways. But it is no panacea for bias and prejudice. Far from it. These tools can be very important for HR departments, but they present great risk as well. Counsel must understand these risks and anticipate the litigation that will surely follow.
David Walton is a partner in the Philadelphia office of Fisher Phillips, one of the country’s largest labor and employment law firms representing management. He focuses his practice on trade secrets, restrictive covenants, and employment litigation, with a particular emphasis on using legal innovation and evolving technologies to help achieve clients’ desired outcomes and enhance the delivery of services. He also frequently writes and speaks on legal technology, digital forensics, cyber law, e-discovery, and employment litigation.