Compliance and legislative trends in the use of AI and automated hiring tools
While AI tools can improve efficiency in the hiring and onboarding process, they can also (unintentionally) pose issues for candidates and possible compliance risk to employers.
The use of Artificial Intelligence (AI) and automated employment tools by employers continues to gain the attention of lawmakers at both the federal and state level. This evolving compliance trend stems from lawmakers’ concerns that discrimination and bias may be unintentionally rooted in AI hiring tools: because of the way the underlying algorithms are designed, bias can creep into the hiring process without anyone intending it. For this reason, lawmakers are increasingly focused on AI and how it is used for employment purposes.
As far as legislative developments go, I will point to the National Artificial Intelligence Initiative Act of 2020, which was enacted as part of broader federal legislation to ensure continued U.S. leadership in AI research and development. Unfortunately, one of the challenges with AI at present is that there is still no single agreed-upon definition of exactly what AI is. More guidance is needed from regulators on what constitutes AI and on the specific concerns around its use in tools that support the hiring process.
U.S. federal developments
Given what is discussed above, here are some of the major federal developments to note:
- In April 2022, the Department of Commerce appointed 27 members to the National AI Advisory Committee, a federal body that advises the President on AI-related issues. Additionally, in October 2022 the White House released its Blueprint for an AI Bill of Rights, which addresses, in part, employment decisions that use AI tools and the potential for discrimination.
- The Federal Trade Commission (FTC) may have been the most active of all the federal agencies in recent years. The FTC has issued numerous reports on the subject, and I would encourage those interested to review them to better understand how the FTC views the use of AI and automated tools, and the problems it sees when employers rely on them to make a final decision without any human review. Specifically, the FTC is concerned about the use of AI and algorithmic automation and their impact on discrimination and bias in the employment context.
- In May 2022, the Equal Employment Opportunity Commission (EEOC) and the Department of Justice issued guidance on the Americans with Disabilities Act (ADA) and the use of AI to assess job applicants and employees. We have also seen federal legislative activity around this topic, including two pending bills, H.R. 6590 and S. 3572, the Algorithmic Accountability Act of 2022, which would essentially direct the FTC to conduct or require impact assessments of automated hiring systems and augmented critical decision processes, otherwise referred to as algorithms. Additionally, the EEOC has identified the use of AI in automated employment tools as part of its Draft Strategic Enforcement Plan and is currently seeking comments on the draft.
U.S. state developments
There is also increased activity at the state and local level, especially in New York City and California. For example:
- New York City passed Local Law 144 of 2021 (Int. 1894-A), which became effective on January 1, 2023. This law creates new obligations for an employer or employment agency using artificial intelligence as part of an automated employment decision tool. In some cases, tools that meet the law’s definition may need to undergo a bias audit to determine whether there is any potential for a discriminatory impact. The enforcement rules are currently under review, and enforcement of the new law has been pushed to April 2023.
- California has also proposed draft revisions, titled “Employment Regulations Regarding Automated Decision Systems,” which would expand the state’s existing employment discrimination law and increase the liability risks of employers and their vendors who use, sell, or administer automated hiring tools that leverage AI. Those rules have not yet been adopted, so we are monitoring this development as well.
- Other states, including Colorado, Illinois, Vermont, and Washington, have created task forces to study AI, and as a result I expect to see more bills on this topic by the end of 2023.
Navigating compliance in 2023
As we all continue to watch legislation on this topic, be aware that definitions of AI can be quite broad and vague. If you are not clear on exactly what might constitute the use of AI in an automated hiring decision tool, consult your legal counsel to help with that assessment, and review your procedures regarding AI and automation. Employers should also consider regularly reviewing and updating their hiring processes and background screening programs in light of both current and future AI employment tool legislation.
Chris Christian is the Director of Compliance at Sterling.
Sterling is not a law firm. This publication is for informational purposes only and nothing contained in it should be construed as legal advice. We expressly disclaim any warranty or responsibility for damages arising from this information. We encourage you to consult with legal counsel regarding your specific needs. We do not undertake any duty to update previously posted materials.