Battling bias in workplace predictive AI: Challenges for employers
Algorithmic bias in the predictive artificial intelligence (AI) tools used in recruiting and human capital management can give rise to violations of Title VII of the Civil Rights Act of 1964 and other discrimination laws, an issue that has caught the attention of both industry and regulators. Large employers that manage a significant volume of applications increasingly rely on predictive AI algorithms to analyze data and make employment-related suggestions and decisions.
Yet caution is urged. The adage "garbage in, garbage out" applies to AI. The unconscious biases of AI's human creators, and the more blatant bias embedded in the datasets from which AI learns, become part of the algorithmic "thought" process of the tool itself. Employers cannot rely on AI tools alone for employee recruitment and management. Proactively mitigating algorithmic bias is not only warranted but, in some forward-thinking jurisdictions, mandatory.
The law
Title VII and related discrimination laws prohibit intentional discrimination in employment (disparate treatment) on the basis of a person's race, color, religion, sex, or national origin, as well as characteristics such as age covered by companion statutes. Title VII also prohibits "disparate impact" discrimination, which results when employers use facially neutral tests or selection procedures that have the effect of disproportionately excluding persons in those protected classes, unless the tests are "job-related for the position in question and consistent with business necessity." A selection procedure is any "measure, combination of measures, or procedure used for making employment decisions." See Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures, EEOC Technical Guidance (May 18, 2023).
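By way of illustration, the "selection rate" comparison at the heart of disparate impact analysis is simple arithmetic. The short Python sketch below applies the EEOC's longstanding four-fifths rule of thumb (29 C.F.R. §1607.4(D)) to hypothetical hiring numbers; the figures and group labels are invented for illustration only.

```python
# Illustrative sketch of the four-fifths rule of thumb for adverse
# impact (29 C.F.R. § 1607.4(D)). All numbers are hypothetical.

applicants = {"group_a": 200, "group_b": 150}  # applicants per group
hired = {"group_a": 60, "group_b": 24}         # selections per group

# Selection rate = number selected / number who applied, per group.
rates = {g: hired[g] / applicants[g] for g in applicants}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    flag = "potential adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, ratio {ratio:.2f} -> {flag}")
```

A ratio below 0.8, as with group_b here (16% versus 30%, a ratio of 0.53), is what the Uniform Guidelines treat as preliminary evidence of adverse impact, though the EEOC cautions that the rule of thumb is not dispositive.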
Origins of AI bias
The EEOC defines AI as a "machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions, influencing real or virtual environments." Algorithmic discrimination, as defined in the White House Office of Science and Technology Policy's Blueprint for an AI Bill of Rights (October 2022), occurs when "automated systems contribute to unjustified different treatment or impacts disfavoring people" based on their protected class status. There are three main contributors to algorithmic discrimination: the people who design the system, the dataset from which it learns, and the wording of the problem it is asked to solve. One can think of the dataset as nature and the human designers as nurture.
First, the people who design and program AI systems are largely male and white, with their own varied implicit biases. According to the online job board Zippia, AI specialists are 90.1% male and 67% white, with an average age of 44; of that cohort, 63% hold a college degree and 17% hold a master's degree. Those studying AI bias posit that developers "code into" algorithms their own biases.
Leigh Harvis-Nazzario, writing in the Rutgers Computer & Technology Law Journal (2022), stated: "We need to think about who gets a seat at the table when these systems are proposed since those people ultimately shape the discussion about ethical deployments of their technology." Consideration should be given to having AI developers undergo unconscious-bias training and, for employers that are significant users of AI, to hiring a tech-savvy chief of diversity and inclusion to work with AI engineering teams.
Second is the dataset from which predictive or generative AI obtains its "knowledge." Typically, a developer controls the creation of a model and, preferably, trains it on historical real-world data. ChatGPT, for example, is designed to understand and produce human language using billions of datapoints drawn from websites, books, articles, and "Common Crawl," a public dataset of billions of web pages.
Given that AI learns patterns and relationships in data, all the historical material on the internet, whether sexist, racially biased, etc., is, unless filtered out, scooped up and “learned” by the program. As articulated by the EEOC in its Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems: “Automated system outcomes can be skewed by unrepresentative or imbalanced datasets.”
Therefore, the EEOC warns that AI models must be built to be bias-free and that training data must be carefully curated to ensure that only relevant, high-quality data tailored to the specific employment context is used.
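One concrete, if simplified, curation step is to audit how each group is represented in the training data before a model learns from it. The Python sketch below is a minimal illustration with hypothetical records and field names, not a substitute for a full bias audit.

```python
from collections import Counter

# Hypothetical training records; field names are illustrative only.
training_resumes = [
    {"id": 1, "sex": "M", "hired": True},
    {"id": 2, "sex": "F", "hired": False},
    {"id": 3, "sex": "M", "hired": True},
    # ...thousands more records in a real pipeline
]

# Count how each group is represented among the positive ("hired")
# examples the model will learn to imitate.
positives = Counter(r["sex"] for r in training_resumes if r["hired"])
total = sum(positives.values())

for group, count in positives.items():
    # A heavily skewed share here is the "imbalanced dataset" the
    # EEOC warns about: the model will learn the skew as signal.
    print(f"{group}: {count / total:.0%} of positive training examples")
```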
A well-known example of algorithmic discrimination involved Amazon, which used AI to scour sites like LinkedIn to source and rank potential candidates. An October 10, 2018, article in Slate reported that Amazon had trained its algorithms on a dataset of resumes from prior Amazon applicants, most of whom were male.
Because the model learned the terms common to those applicants, its output skewed heavily toward men. The program essentially "taught itself that male candidates were preferable," penalizing resumes that included the word "women's," such as entries referring to women's teams or certain women's colleges.
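The mechanism is easy to reproduce in miniature. The sketch below (using scikit-learn, with entirely synthetic data, and in no way a reconstruction of Amazon's actual system) trains a toy text classifier on male-skewed historical outcomes and then inspects the weight it learned for the token "women":

```python
# Toy demonstration of how skewed training history becomes a learned
# penalty. Synthetic data only; requires scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "chess club captain, java developer",          # historical "advance"
    "java developer, hiking club",                 # historical "advance"
    "women's chess club captain, java developer",  # historical "reject"
    "women's college, java developer",             # historical "reject"
] * 25  # repeated so the toy model has enough samples

labels = [1, 1, 0, 0] * 25  # 1 = advanced, 0 = rejected (skewed history)

vectorizer = CountVectorizer()  # tokenizes "women's" down to "women"
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, labels)

# The coefficient for "women" comes out strongly negative: the model
# has encoded the historical skew as if it were a job qualification.
idx = vectorizer.vocabulary_["women"]
print("learned weight for 'women':", model.coef_[0][idx])
```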
Third, how a query is written will affect the algorithm's output. Computers need instructions to make predictions and suggestions. Thus, query and job-posting criteria such as salary, managerial status, level of education, desired degrees, and level of experience must be weighted in such a way as to generate, if possible, an unbiased result.
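As a simplified illustration of that weighting, the sketch below scores candidates using only explicitly job-related factors; the field names and weights are hypothetical. Note the caveat in the comments: even facially neutral factors can act as proxies for protected traits, which is why output testing remains necessary.

```python
# Hypothetical scoring function limited to job-related criteria.
# Protected characteristics are deliberately absent from the inputs,
# though neutral factors can still act as proxies for them, so the
# outputs must still be tested for disparate impact.

WEIGHTS = {
    "years_experience": 0.4,
    "relevant_degree": 0.3,
    "managerial_experience": 0.2,
    "certifications": 0.1,
}

def score_candidate(candidate: dict) -> float:
    """Weighted sum over job-related factors, each normalized to 0-1."""
    return sum(WEIGHTS[k] * candidate.get(k, 0.0) for k in WEIGHTS)

print(score_candidate({"years_experience": 0.8, "relevant_degree": 1.0,
                       "managerial_experience": 0.5, "certifications": 0.0}))
```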
Regulation of AI in employment
Attention has turned to the need to regulate AI so as to mitigate algorithmic bias and make AI more "responsible." See Hon. Katherine B. Forrest, AI and Algorithmic Bias: Is the Algorithm Biased?, 4C N.Y. Prac., Com. Litig. in New York State Courts §79:12 (4th ed.). This involves both processes to ensure that algorithms are tested for bias and remain bias-free and more aspirational statements of policy. The White House's October 2022 Blueprint for an AI Bill of Rights articulated five key principles. AI should be:
1) safe and effective;
2) free from algorithmic discrimination;
3) designed to ensure data privacy;
4) transparent and provide information about its design and function; and
5) designed to allow users to opt out and to access a human alternative.
Regarding employment, “Designers, developers, and deployers of automated systems should, among other things, take proactive and continuous measures to … use and design systems in an equitable way.”
There are no federal statutes specific to the use of AI employment tools. The EEOC's May 18, 2023, Technical Guidance attempts to fill that gap. It states that employers, not vendors, will be liable for any discriminatory impact arising from AI use when "such procedure causes a selection rate for individuals in the group that is 'substantially' less than the selection rate for individuals in another group."
Employers, therefore, will want to ensure that their AI tools are designed and tested to be without bias and that their HR teams screen the results to preclude disparate impact. Employers should remain laser-focused on legitimate, non-discriminatory factors, such as experience, skills, etc., when making job offers or other employment-related decisions.
State and local governments are also focused on AI bias. Taking the lead, New York City enacted Local Law 144, which "prohibits employers and employment agencies from using automated employment decision tools (AEDT) unless the tool has been subject to a bias audit within one year of the use of the tool, the information about the bias audit is publicly available, and certain notices have been provided to employees or job candidates."
A bias audit of an AEDT must calculate the selection rate for each race/ethnicity and sex category required to be reported on the EEO-1 form and compare each category's selection rate to that of the most selected category to determine an impact ratio. While the New York law has been criticized for focusing solely on hiring and promotion and only on race and gender discrimination, it is cutting-edge and a step in the right direction.
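The impact-ratio arithmetic the law requires can be sketched briefly. The Python example below uses invented numbers and simplified category labels; the law's implementing rules also require intersectional categories and other details omitted here.

```python
# Sketch of a Local Law 144-style impact-ratio calculation.
# Numbers and category labels are hypothetical and simplified.

audit_data = {  # category: (candidates assessed, candidates selected)
    "White / Male": (400, 120),
    "White / Female": (380, 95),
    "Black / Male": (150, 30),
    "Black / Female": (140, 21),
}

rates = {cat: sel / total for cat, (total, sel) in audit_data.items()}
best = max(rates.values())  # rate of the most selected category

# Impact ratio = category's selection rate / most selected category's rate.
for cat, rate in sorted(rates.items(), key=lambda kv: -kv[1]):
    print(f"{cat}: selection rate {rate:.1%}, impact ratio {rate / best:.2f}")
```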
The use of AI in employment is truly a new frontier. AI reflects the world from which it learns and those who create it. It is, therefore, inherently flawed. It must be tamed and tested. Employers must learn to use, manage, and control this new tool to maximize its advantages while limiting its liabilities.
HR departments must become more tech-savvy and perhaps add new categories of staff. Regulation will likely grow, the extent of which will depend on the ability of industry vendors and users to self-regulate. Some believe AI may be almost impossible to regulate because it constantly evolves. Time will tell.
Richard Reice is a partner in the labor & employment and civil litigation practices in the New York offices of Messner Reeves LLP.