Is AI an intelligent way to make employment decisions?
AI programs are built upon algorithms created by human beings—but humans are biased, fallible and fundamentally human.
Artificial intelligence seems to be society’s new sliced bread, and everybody wants a slice.
AI guides us along city streets and chooses music to calm us down while we’re stuck in traffic. It cooks dinner for us and creates mood lighting. It reads us books and answers our questions. And increasingly, it tells companies, large and small, whether they should hire us to work for them.
This last act of artificial intelligence may, however, not be so very intelligent. AI programs are built upon algorithms created by human beings—but humans are biased, fallible and fundamentally human. Which means that any algorithm used to make AI decisions in the workplace may also be biased, fallible and—gasp—human.
In other words, if AI is meant to mimic human behavior or decisions, it follows that those choices may also be biased. When AI sifts and sorts through applications, it may (intentionally or unintentionally) favor certain information or words while rejecting others. Applicants may never know what magic word could have made the difference for them.
This may explain why Congress last year introduced the Algorithmic Accountability Act of 2022. The bill would require the FTC to establish rules for companies to conduct impact assessments of their AI systems. Such assessments, presumably, would identify the go/no-go points within those systems and their real-world consequences.
Other federal bodies have also jumped on the AI accountability bandwagon. In October 2022, the Office of Science and Technology Policy, concerned about the impact of AI on the hiring of individuals with disabilities, released its Blueprint for an AI Bill of Rights. The document was designed to foster more equitable and inclusive digital hiring of workers with disabilities, as well as other underserved communities.
The Equal Employment Opportunity Commission held a hearing in January on how to prevent unlawful bias in the use of AI, and its webpage on the “Artificial Intelligence and Algorithmic Fairness Initiative” examines fairness in workplaces that use AI. On April 17, the Department of Labor hosted an online “think tank” to examine the use of AI tools in hiring.
With the advent of exciting new technology, such as ChatGPT, lawmakers are finally taking seriously both the benefits and dangers of AI. But the first major legislation to address its use by employers will roll out this summer in New York City. Local Law 144, initially scheduled to become effective April 15, will now take effect July 5.
The law governs employers’ use of “automated employment decision tools,” or AEDTs. AEDTs are “any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation” and that is used to “substantially assist or replace discretionary decision making for making employment decisions that impact natural persons.”
The law covers any AEDT that (i) relies solely on a simplified output (e.g., a score, tag, classification, or ranking) without considering other factors; (ii) uses a simplified output as one of a set of criteria that is weighted more heavily than the others; or (iii) uses a simplified output to overrule conclusions derived from other factors, including human decision-making.
Companies will be barred from using an AEDT to screen candidates for hiring or promotion unless the tool has undergone a bias audit by an independent auditor within the year before its use and the companies post a summary of the audit results on their websites.
The audit should evaluate the AEDT’s potential disparate impact on job applicants or employees across demographic categories that mirror EEO-1 reportable data, such as sex, race, and ethnicity.
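To make the idea of disparate impact concrete, here is a minimal, hypothetical sketch of the kind of arithmetic such an audit can involve: computing a selection rate for each demographic category and comparing each rate to the highest one (an impact ratio). The sample data, category names, and interpretation are illustrative assumptions, not the law’s prescribed methodology.

```python
# Illustrative sketch only (not the official audit methodology):
# compute per-category selection rates and impact ratios from an
# AEDT's historical screening outcomes.
from collections import defaultdict

# Hypothetical audit records: (demographic category, was_selected).
# Real audits would draw on far larger datasets.
records = [
    ("Group A", True), ("Group A", False), ("Group A", True),
    ("Group B", True), ("Group B", False), ("Group B", False),
]

counts = defaultdict(lambda: {"selected": 0, "total": 0})
for category, selected in records:
    counts[category]["total"] += 1
    counts[category]["selected"] += int(selected)

# Selection rate = candidates selected / candidates evaluated, per category.
selection_rates = {cat: c["selected"] / c["total"] for cat, c in counts.items()}

# Impact ratio = a category's selection rate / the highest selection rate.
# Ratios well below 1.0 would flag potential disparate impact for review.
best_rate = max(selection_rates.values())
impact_ratios = {cat: rate / best_rate for cat, rate in selection_rates.items()}

for cat, rate in selection_rates.items():
    print(f"{cat}: selection rate {rate:.2f}, impact ratio {impact_ratios[cat]:.2f}")
```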
Companies that use AI must notify employees and job candidates that an AEDT will be used, identify the qualifications the AEDT assesses, and disclose the types and sources of data collected for AEDT purposes. They must also explain their data retention policies and give candidates the opportunity to request an alternative selection process or a reasonable accommodation, if one is available. Failure to comply with the law can subject violators to fines ranging from $500 to $1,500 per violation, per day.
The law also provides a private right of action for employees and candidates who believe they were denied jobs or promotions as a result of bias in the use of a covered AEDT. In other words, the process should be fair and transparent. If employees or applicants believe they were denied an equal opportunity due to AI, they should know their rights.
New York City employers should therefore take the time now to review their hiring and promotion processes to determine whether they are using AEDTs covered by the law and whether they want to continue using such tools for these purposes. Employers that elect to use covered AEDTs must budget for and identify qualified independent auditors to conduct the required bias audits, establish data retention policies, and provide appropriate training on the law to recruiters and other HR professionals.
The New York City law may be the first comprehensive employment AI law, but it will not be the last. California’s Privacy Rights Act, effective January 1, is expected to address employer AI and notice requirements. Illinois now requires employers to obtain informed consent from applicants whose video recordings are analyzed by AI for hiring purposes. A New Jersey bill similar to NYC’s AEDT law is now under consideration. It’s just a matter of time before federal laws catch up with what state and local governments are doing.
Joseph Jeziorkowski is a partner with Valiant Law, with offices in New York and California, and represents clients in sexual harassment, discrimination, wrongful termination and other employment matters.