Heed EEOC guidance on AI-powered employment tools

Regulation of AI in the employment context proceeds.


While more general questions about the use of artificial intelligence remain pending, as noted in the editorial above, regulation of AI in the employment context is moving forward. New York City rules on the use of AI in hiring and promotion are scheduled to take effect on July 5, 2023. And the federal Equal Employment Opportunity Commission (EEOC) recently issued non-binding technical guidance for employers on how to avoid illegal disparate impact discrimination when using AI in the workplace. Disparate impact discrimination is broadly defined as the use of a facially neutral standard or practice that nonetheless works to the disadvantage of a group protected by law. For example, a 5'10" height requirement applied to all job applicants would screen out far more women than men.

The EEOC’s guidance is important for New Jersey employers not just because most are subject to Title VII of the Civil Rights Act of 1964, but also because New Jersey courts generally look to federal standards when interpreting our Law Against Discrimination. The EEOC’s recommendations for employers, and its statements of position on liability for disparate impact related to AI-powered employment tools, include the following:

• AI-powered tools for making or informing decisions about hiring, promoting, terminating, or taking other employment actions are generally subject to the Uniform Guidelines on Employee Selection Procedures, which have been in effect since 1978.

• Employers can and should assess whether an AI-powered tool has an adverse impact on a protected group, i.e., whether it causes a selection rate for the protected group that is substantially less than the selection rate for another group. If it does, its use will violate Title VII unless the employer can show that the use is “job related and consistent with business necessity.” An impact generally will be considered substantial when the selection rate for the protected group is less than 80% of the selection rate for the favored group (the EEOC’s four-fifths rule; see the worked example after this list).

• An employer can be liable for a disparate impact caused by an AI-powered employment selection tool, even if the tool was developed by a third party and even if it is administered by a third party. Employers should take precautions by asking the developer or administrator of the tool whether the tool has been evaluated for disparate impact. But according to the EEOC, an employer may still be liable where it relies on a third party’s incorrect assessment of whether the tool creates a disparate impact.

• The employer should ask the third party whether it relied upon the four-fifths rule or another test of statistical significance in determining whether use of the tool will have a disparate impact.

• When an employer learns that a tool would have a disparate impact on a protected group or groups, it should take steps to reduce the impact or select a different tool. Employers should conduct self-analyses on an ongoing basis to ensure that their AI-powered selection tools do not have a disparate impact.
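To make the four-fifths rule concrete, the minimal sketch below works through the arithmetic. The applicant counts, function names, and figures are hypothetical, chosen purely for illustration; they do not come from the EEOC guidance. A selection rate is simply the share of a group’s applicants who advance, and the impact ratio compares the protected group’s rate to the favored group’s.

```python
# Minimal sketch of the four-fifths rule arithmetic.
# All numbers and names below are hypothetical, for illustration only.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who were selected."""
    return selected / applicants

def passes_four_fifths(rate_protected: float, rate_favored: float) -> bool:
    """True if the protected group's selection rate is at least 80%
    of the favored group's rate (no adverse impact under the rule)."""
    return rate_protected >= 0.8 * rate_favored

# Hypothetical example: 48 of 80 men selected vs. 12 of 40 women.
men_rate = selection_rate(48, 80)    # 0.60
women_rate = selection_rate(12, 40)  # 0.30
ratio = women_rate / men_rate        # 0.50, well below the 0.80 threshold

print(f"Men's selection rate:   {men_rate:.0%}")
print(f"Women's selection rate: {women_rate:.0%}")
verdict = "passes" if passes_four_fifths(women_rate, men_rate) else "fails"
print(f"Impact ratio: {ratio:.0%} -> {verdict} the four-fifths rule")
```

In this hypothetical, women are selected at half the rate of men, so the tool fails the four-fifths rule and the employer would need to show that its use is job related and consistent with business necessity, reduce the impact, or select a different tool.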


The EEOC’s guidance, although non-binding, stakes out its initial position on what employers’ obligations are and when it will find employers liable for disparate impact discrimination related to the use of AI-powered tools in the workplace. Its broad standards for liability, along with its call for continuous self-analysis, should cause employers to consider carefully why, when, and how AI is employed in the workplace.