5 ways for employers to avoid discrimination when using AI
Businesses should prioritize the implementation of ethical AI tools.
For decades, companies have harnessed the power of artificial intelligence (AI) in various business applications. More recently, however, the use of AI tools in employment decisions has intensified, igniting concerns about bias and discrimination against job applicants and employees.
Decisions made by AI are only as good as the algorithms and data they rely on, so biases inherent in either can skew the results. Sometimes this creates disparate or adverse impacts on protected groups.
The Equal Employment Opportunity Commission (EEOC) forbids employment practices that have a disparate impact on protected classes of workers. It recently issued a technical assistance document specifying that the use of AI tools for employment decisions is considered a selection procedure, and that an adverse impact on a protected class caused by using an AI tool violates Title VII of the Civil Rights Act, which prohibits employment discrimination.
The guidance also indicated that an employer will be held responsible for employment actions taken on the basis of an AI tool's decisions, even if the tool was developed by an outside entity.
This does not necessarily mean that employers should stop using AI tools altogether. When AI tools are used carefully, they can actually reduce bias and discrimination in employment decision-making. Employers that want to act responsibly and avoid litigation should implement the following steps.
Human oversight and accountability
A specific person or team should be assigned the responsibility of monitoring and approving the use of AI. These employees should be knowledgeable about the potential for bias, how to counteract it, and the consequences of adverse impacts on protected classes. The best practice is to require that all decisions based on AI recommendations be made by a human.
Establish policies for AI use
Policies should limit the use of AI in employment decisions, require employees to disclose when they use AI to complete a task, clarify expectations for data privacy, confidentiality, and security, and detail consequences for employees who violate the policies.
Training and education on AI
Educating employees on the authorized use of AI and the potential for biased results helps combat irresponsible use. Employees should not be allowed to use AI for business purposes without understanding the potential for disparate or adverse impacts on protected groups. Training should include tips for reducing or eliminating bias in the results.
Implement ethical AI tools
Companies that offer ethical AI tools are honest about the potential for bias inherent in their data. They are also transparent about how they identify biases and reduce their impact on decision-making. Given the sensitivity of personnel data, businesses should also seek AI vendors with robust data privacy and security practices. If this information is not readily available on a vendor's website, businesses should review the vendor's terms and conditions before adopting its tools.
Regular audits of employment decisions
Finally, companies should conduct regular audits of their employment practices, including their use of AI, to ensure they are not creating a disparate or adverse impact on a protected class.
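One widely used screen in such audits is the "four-fifths rule" from the EEOC's Uniform Guidelines on Employee Selection Procedures: if a group's selection rate is less than 80% of the highest group's rate, that may indicate adverse impact. A minimal sketch of the calculation, using hypothetical group names and illustrative figures (not real data):

```python
# Hypothetical disparate-impact screen using the EEOC four-fifths rule.
# All group names and counts below are illustrative placeholders.

# Applicants and hires per group over the audit period.
outcomes = {
    "group_a": {"applicants": 100, "hired": 60},
    "group_b": {"applicants": 80, "hired": 24},
}

# Selection rate = hires / applicants for each group.
rates = {g: d["hired"] / d["applicants"] for g, d in outcomes.items()}
highest = max(rates.values())

for group, rate in rates.items():
    # Impact ratio: this group's rate relative to the highest rate.
    ratio = rate / highest
    status = "review for adverse impact" if ratio < 0.8 else "within guideline"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} -> {status}")
```

The four-fifths rule is a rule of thumb, not a legal conclusion; flagged results should prompt deeper statistical analysis and review with counsel rather than an automatic finding of discrimination.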
Stefanie Camfield is an assistant general counsel and human resources consultant at Engage PEO.