AI hiring bias? EEOC targets HR artificial intelligence tools in new enforcement plan

As more employers rely on AI for hiring than ever before, observers warn that without proper guidance on how the technology deals with discrimination, organizations could face growing civil rights lawsuits.


If there is one piece of technology dominating the conversation right now, it’s artificial intelligence. Not only is it underpinning tools like ChatGPT, but it’s also upsetting industries like digital art and pushing copyright law to reevaluate its parameters.

On Tuesday, the Equal Employment Opportunity Commission met to discuss the real-time effects of the powerful technology on an area that’s been at the forefront of adoption—human resources.

Indeed, 42% of organizations with more than 5,000 employees relied on AI to support employment-related activities, according to a Feb. 2022 survey from the Society for Human Resource Management. Among those, 79% used it for recruitment and hiring. Another report from the Harvard Business Review found that 99% of Fortune 500 companies in the U.S. use AI for hiring, often touting its efficiency and its ability to bolster diversity and inclusion.

But in a session titled “Navigating Employment Discrimination in AI and Automated Systems: A New Civil Rights Frontier,” the EEOC invited expert panelists from the legal industry, tech, and academia to discuss their opinions on whether AI is working the way it should in HR processes.

But while adoption has been rapid, panelists were less certain about the technology's ability to select the right candidates without violating anti-discrimination laws.

In fact, they stressed that because most machine learning systems are trained on data sets shaped by institutional biases around race, gender, and sexual orientation (such as eviction histories, credit reports, pronouns, and criminal proceedings), the systems are just as likely as any human being to violate Title VII of the Civil Rights Act.

“The employer doesn’t have to explicitly state a discriminatory preference. The software might simply learn those preferences by observing its past hiring decisions,” said Pauline Kim, Daniel Noyes Kirby professor of law at Washington University School of Law in St. Louis. “Even employers who have no discriminatory intent could inadvertently rely on an AI tool that is systematically biased. So these automated systems truly do represent a new frontier in civil rights.”
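Kim's point — that software can "learn" a discriminatory preference simply by observing past hiring decisions — can be illustrated with a toy sketch (not from the hearing; all data here is hypothetical). Even though the protected attribute never appears as an input, a correlated proxy such as a zip code lets a model reproduce historical preferences:

```python
from collections import defaultdict

# Hypothetical past hiring records: (zip_code, qualified, hired).
# Historically, equally qualified candidates from zip "B" were hired
# less often. No protected attribute appears anywhere in the data.
history = [
    ("A", True, True), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", True, True), ("B", False, False),
]

# "Train" by memorizing the historical hire rate for each
# (zip_code, qualified) combination.
rates = defaultdict(list)
for zip_code, qualified, hired in history:
    rates[(zip_code, qualified)].append(hired)

def predict(zip_code, qualified):
    # Predict "hire" if the historical hire rate for this
    # combination exceeds 50%.
    outcomes = rates[(zip_code, qualified)]
    return sum(outcomes) / len(outcomes) > 0.5

# Two equally qualified candidates receive different predictions,
# purely because the zip-code proxy encodes past bias.
print(predict("A", qualified=True))  # True
print(predict("B", qualified=True))  # False
```

Real hiring models are far more complex, but the mechanism is the same: a system optimized to mimic past decisions will also mimic whatever bias those decisions contained.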

For example, one of the first cases Kim worked on as a new lawyer centered around a temp agency receiving requests to match its workers with certain required skills. However, one of these requests entailed a demand for white workers, and in fulfilling it, the company engaged in a discriminatory practice.

Kim added: “So Title VII clearly prohibits the blatantly discriminatory acts of the temp agency years ago and it undoubtedly applies to new forms of discrimination that are emerging today. However, the doctrine that is developed with human decision-makers in mind may need to be clarified to address the risks that are posed by automated systems.”

At the same time, a blanket prohibition on the use of such protected characteristics is not helpful either, since auditors need that data to diagnose discriminatory behavior.


Therefore, Kim stressed the need for better clarifications on how the law functions where AI in HR processes is concerned, and offered advice on how the EEOC could move forward.

“First, the agency could make clear that AI tools that cause discriminatory effects cannot be defended solely on the basis of statistical correlations,” Kim said, adding that the commission should focus on the “substantive validity” of selection tools. That means checking whether an AI tool is actually measuring job-related skills and attributes rather than simply relying on correlations.

Then, the EEOC could offer guidance on the duty of employers to explore less biased alternatives when it comes to choosing AI tools, she noted.

Several panelists voiced approval of the EEOC’s draft Strategic Enforcement Plan, which the commission released last month and published in the Federal Register; comments are being accepted until Feb. 9, 2023. The document focuses on strategies to crack down on discriminatory practices stemming from the use of AI in recruitment, screening, and hiring.

Additionally, they suggested that the EEOC also take guidance from the Americans with Disabilities Act as it continues its efforts.

Suresh Venkatasubramanian, the deputy director of the data science initiative and professor of computer science at Brown University, said an AI Bill of Rights is vital to provide “guardrails” for what he considers a fast-evolving technology. He added that the guidelines provided by the ADA and the National Institute of Standards and Technology (NIST) are also an important basis for AI training in hiring.

“[The guidelines] say that claims that a piece of hiring tech is safe and effective should be verified. They say that claims that tech mitigates disparate impact should be verified. They say that this verification should continue after deployment on an ongoing basis,” he said. “Because after all, the key advantage of machine learning is that it learns and adapts to social verification.”