AI, ADA and EEOC: Are your hiring practices compliant?

Employers are responsible for vetting potential bias in AI-based hiring tools, and the EEOC warns of three potential ADA violations.

Employers should ensure they are selecting algorithmic evaluation tools designed with accessibility for individuals with disabilities in mind. (Image: Shutterstock)

The abundance of software and artificial intelligence (AI) tools that allow employers to hire and assess candidates with little to no human interaction is raising a question about inclusiveness: Do these tools violate the Equal Employment Opportunity laws that protect people with disabilities?


The U.S. Equal Employment Opportunity Commission (EEOC) has released guidance on how these tools may violate existing requirements under Title I of the Americans with Disabilities Act (ADA). Employers are responsible for vetting potential bias in AI-based hiring tools, and the EEOC warns of three common applications in which these tools could violate the ADA.

The guidance also provides practical steps for reducing the chances that algorithmic decision-making will screen out an individual because of a disability.

Lauren Daming, Greensfelder attorney and CIPP, says: “While the EEOC’s new guidance is a big step toward helping employers evaluate their hiring practices for potential disability bias, it leaves many issues unaddressed. For example, the guidance recognizes that algorithmic decision-making tools may also negatively affect applicants due to other protected characteristics such as race or sex, but the guidance is limited to disability-related considerations alone.”

She adds that “although the EEOC’s action highlights the potential for disability discrimination related to hiring tools, there are a variety of other employment and privacy laws potentially affecting the use of algorithmic decision-making in the workplace.”
