AI programs used by employers may lead to Title VII discrimination claims
The Equal Employment Opportunity Commission (EEOC) has provided guidance that AI programs can trigger Title VII discrimination violations when protected classes are disfavored in employment selection processes.
The use of artificial intelligence (AI) programs to make employment-related decisions exposes employers to the risk of Title VII discrimination claims. AI programs are complex systems that can generate biased results against protected groups of people. Employers are becoming increasingly dependent on AI programs to make decisions about hiring, promotions, terminations, and employee performance monitoring.
The risk of bias, combined with employers’ increased dependence on AI programs, is leading to greater regulation and potential liability. The EEOC has issued guidance explaining when the use of such programs in employment selection processes will violate Title VII.
Under the EEOC guidance, an AI program that “makes or informs decisions” about employment-related selections will be treated as a selection procedure for purposes of a Title VII discrimination claim. Regardless of the employer’s intent, a selection procedure is discriminatory when individuals in a protected group are selected at a “substantially” different rate than individuals in a nonprotected group.
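Although the guidance does not reduce “substantially” to a single bright line, the EEOC points to the longstanding “four-fifths” rule of thumb: a selection rate for one group that is less than 80% of the rate for the most-selected group is generally regarded as evidence of a substantially different rate. The short Python sketch below illustrates the arithmetic; the group labels and counts are hypothetical.

    # Hypothetical illustration of the "four-fifths" rule of thumb used to
    # compare selection rates. All group labels and counts are invented.

    def selection_rate(selected: int, applicants: int) -> float:
        """Share of applicants in a group who were selected."""
        return selected / applicants

    # Suppose an AI screening tool advanced 48 of 80 applicants from group A
    # and 12 of 40 applicants from group B (hypothetical figures).
    rate_a = selection_rate(48, 80)  # 0.60
    rate_b = selection_rate(12, 40)  # 0.30

    # Impact ratio: the lower group's rate divided by the higher group's rate.
    impact_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)  # 0.50

    # A ratio below 0.80 is generally treated as evidence of a
    # "substantially" different selection rate.
    print(f"Impact ratio: {impact_ratio:.2f}")
    print("Substantially different?", impact_ratio < 0.80)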
If a “substantial” difference exists, the employer must establish that the selection procedure is “job related and consistent with business necessity.” If the employer makes that showing, the Title VII claim will fail unless the charging party shows there was a “less discriminatory alternative available” for the employer to use in place of the AI program.
The EEOC guidance provides that AI programs used in employment decision-making will be treated as selection procedures, which raises several issues for employers to consider. AI programs develop bias based on their data and computational techniques. Employers that train AI programs on old employment selection data run the risk of reproducing any biases that existed in previous selections.
When training data reflects a history in which a nonprotected group was chosen more often than a protected group, the AI program will tend to follow suit. If previous employment selections lack sufficient data or include biased data, the AI program may end up excluding protected groups. Even where the exclusion is unintentional, it is difficult to create a completely unbiased data set for an employment-related selection. Thus, there is significant risk that AI programs will produce discriminatory results.
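To see how this happens, consider a minimal sketch of a naive screening model trained on hypothetical historical hiring records. The data, groups, and threshold are all invented for illustration; real systems typically learn such disparities indirectly, through features correlated with group membership, rather than from group membership itself.

    # Minimal, hypothetical sketch of how biased historical data propagates.
    # A naive "model" that scores candidates by how often similar candidates
    # were hired in the past simply reproduces the historical disparity.

    from collections import defaultdict

    # Invented historical hiring records: (group, hired)
    history = ([("A", True)] * 70 + [("A", False)] * 30 +
               [("B", True)] * 30 + [("B", False)] * 70)

    # "Training": learn the historical hire rate for each group.
    totals, hires = defaultdict(int), defaultdict(int)
    for group, hired in history:
        totals[group] += 1
        hires[group] += hired
    hire_rate = {g: hires[g] / totals[g] for g in totals}

    # "Prediction": recommend any candidate whose learned score clears 0.5.
    # (Real tools rarely use group directly; correlated features can act as
    # proxies for it, with the same discriminatory effect.)
    def recommend(group: str) -> bool:
        return hire_rate[group] >= 0.5

    print(hire_rate)                        # {'A': 0.7, 'B': 0.3}
    print(recommend("A"), recommend("B"))   # True False: bias reproduced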
Another issue is that certain AI programs can evolve on their own. AI programs that “learn” independently make it difficult to understand how and why particular results are produced. This poses a problem for employers because it is hard to monitor and fix an AI program that is generating biased results when the cause of the bias is unknown.
AI programs that “learn” organically also pose a risk because employers may be unable to detect biased results at all. An employer that fails to detect biased results, regardless of intent, will likely be held liable under Title VII for using a discriminatory selection procedure.
EEOC guidance provides that an AI-based selection procedure that produces biased results must be “job related” and supported by a “business necessity” to avoid liability under Title VII. Case law and EEOC guidance discuss how other selection procedures meet these requirements, but it is unclear how the standards will apply in the context of an AI program. As a result, it is difficult to know when the use of an AI program satisfies the “job related” and “business necessity” requirements needed to prevent a Title VII violation.
The standard for selection procedures under Title VII makes clear that a biased AI program is unlawful if a less discriminatory means is available. Most AI programs function to replace decision-making once performed by a human, which calls into question whether humans themselves are an available, less discriminatory means. Humans have their own biases that may lead to nonprotected groups being favored over protected groups. Thus, humans may be equally discriminatory.
Many employers initially adopted AI programs precisely to eliminate human bias from employment-related decisions. It is unclear, however, whether AI programs or humans are the better option for producing unbiased employment selections. As such, it will be challenging under Title VII to determine which method functions as the less discriminatory means.
Additionally, issues exist regarding who should be liable for a discriminatory AI program that violates Title VII. EEOC guidance indicates that employers may be liable even if the AI program was administered or designed by a third party, because employers make the final employment-related decision. Employers engage third parties for support in various employment-related processes. The third party may not make the final decision about whom to select, but it can directly exclude people from the selection process. For example, suppose an employer engages a third party to screen 100 resumes down to 50.
The third party uses a discriminatory AI program that produces a substantially different selection rate between a nonprotected group and a protected group. Here, the employer did not directly decide to exclude anyone; it relied on the third party to narrow the pool. Employers engage third parties at various stages of employment-related decisions in which the employer is minimally involved. Where employers delegate this responsibility to third parties, it is unclear who is liable if a Title VII discrimination claim is brought.
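Concrete numbers, all hypothetical, make the exposure in this scenario easier to see. Applying the same four-fifths arithmetic sketched earlier to an assumed group breakdown of the 100 resumes:

    # Hypothetical breakdown of the 100-resume screen described above:
    # 60 applicants from a nonprotected group and 40 from a protected group,
    # with the third party's tool advancing 40 and 10 of them respectively.
    rate_nonprotected = 40 / 60   # about 0.67
    rate_protected = 10 / 40      # 0.25

    impact_ratio = rate_protected / rate_nonprotected  # about 0.38

    # Well below the four-fifths (0.80) threshold, so the screen shows a
    # substantially different selection rate even though the employer never
    # made an individual exclusion decision itself.
    print(f"Impact ratio: {impact_ratio:.2f}")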
In addition to potential Title VII violations, biased AI programs may perpetuate a lack of diversity and inclusion. While many employers are seeking to increase diversity in their workforces, AI programs may not be the solution. The complexity of AI programs makes bias difficult to detect, which may subject protected groups to continued disfavored treatment in employment practices. To prevent such mistreatment, employers should continually monitor their AI programs and ensure that results are not favoring one group over another.
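One way an employer might operationalize that monitoring is a periodic audit of the tool’s outcomes by group, as in the hypothetical sketch below. The function, group labels, and counts are illustrative only, not a compliance standard.

    # Hypothetical periodic audit of an AI tool's outcomes by group.
    # Flags any group whose selection rate falls below four-fifths of the
    # highest group's rate. Labels and counts are illustrative only.

    def audit(outcomes: dict[str, tuple[int, int]]) -> list[str]:
        """outcomes maps group -> (selected, total); returns flagged groups."""
        rates = {g: s / t for g, (s, t) in outcomes.items()}
        best = max(rates.values())
        return [g for g, r in rates.items() if r < 0.8 * best]

    # Example quarterly snapshot (invented numbers):
    flagged = audit({"group_a": (45, 90), "group_b": (9, 40)})
    print(flagged)  # ['group_b'] -> warrants closer review of the tool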
Many issues surround employers’ use of AI programs to make employment-related decisions. AI programs can generate biased results against groups protected by Title VII. There are issues with both the creation of AI programs and access to unbiased data sets.
Additionally, there is confusion about who should be liable for discriminatory conduct when multiple parties are involved in employment selections. Each of these issues is an important consideration that affects the potential liability attached to the use of AI programs in employment-related selections.
Erika Collins is a partner in Faegre Drinker Biddle & Reath’s New York office where she handles complex cross-border employment and human resources needs. Ari Rutledge is a summer associate with the firm.