Hiring and promotion using AI: Prepare for the legal (r)evolution

In the near future, AI may become a necessary part of every employer’s hiring process to varying degrees.

The use of artificial intelligence (AI) has the potential to revolutionize the way companies recruit and select candidates for job openings and promotions. However, as with any new technology, there are concerns about how it may be used and the potential for unintended consequences.

AI can contribute to the recruitment and selection of candidates for hire or promotion in several ways, such as screening résumés, ranking and matching applicants to open roles, and evaluating assessments or recorded interviews.

While the use of AI in hiring has the potential to make the process more efficient and objective, it is important to take steps to ensure that it does not perpetuate existing inequities or lead to discrimination against certain groups of people. Proposed and recently enacted laws and regulations aim to ensure that the use of AI is transparent and that job candidates are treated fairly and have the right to appeal decisions made by AI systems.

EEOC guidance

The U.S. Equal Employment Opportunity Commission (EEOC) recently identified discrimination caused by AI tools as a key enforcement priority as part of its proposed 2023 Strategic Enforcement Plan. The EEOC plans to focus on instances where the use of technology contributes to discrimination against individuals on the basis of their race, ethnicity, age, sex, disability, or other protected classification. The EEOC specifically noted risks found in the “use of automated recruitment, selection, or production and performance management tools; or other existing or emerging technological tools used in employment decisions.”

Discrimination can be intentional, as in the EEOC’s recent suit against an online tutoring company that allegedly programmed its application software to automatically reject female applicants age 55 or older and male applicants age 60 or older. Discrimination can also be inadvertent: the EEOC recently published guidance on the risks of disability discrimination in this context, warning employers to put safeguards in place to prevent such outcomes.

Guideposts from New York City’s new law

The first U.S. law of its kind, New York City Local Law 144 of 2021 regulates the use of “automated employment decision tools” in the screening of candidates for hire or promotion within New York City. Enforcement of the law begins on April 15, 2023. While the currently proposed interpretive rules add some clarity, further changes are possible before enforcement begins.

New York City’s law is likely to provide a roadmap for similar laws across the U.S. Its key components include an annual independent bias audit of the tool, public disclosure of a summary of the audit results, and advance notice to candidates and employees that the tool will be used and of the job qualifications and characteristics it will assess.

Other legal trends

Regulation of AI continues to evolve, with the list of pending and proposed laws rapidly expanding (the National Conference of State Legislatures, for example, tracks pending legislation on its website).

Additional notable legal developments continue to emerge at the state, federal, and international levels.

Practical steps for every employer

The practice of using AI to assist in critical decision-making is already ubiquitous and poised to expand as these tools become more readily available and less expensive to deploy. Companies embracing AI may obtain a significant competitive advantage over their peers, and in the near future, AI may become a necessary part of every employer’s hiring process to varying degrees.

Practical steps for every employer now include taking inventory of the AI tools used in hiring and promotion decisions, vetting vendors and their bias-audit practices, providing candidates with any required notices, keeping humans involved in final decisions, and monitoring this rapidly changing legal landscape.

This publication is co-authored by law firm attorneys and artificial intelligence (AI). The human authors generated this article in part with GPT-3, OpenAI’s large-scale language-generation model. Upon generating draft language, the human authors supplemented it with additional legal trends and analysis and reviewed, edited, and revised the AI’s language to their own liking. The human authors take ultimate responsibility for the content of this publication.

Nicole Stover & Mischa Wheat, with contributions from GPT-3.