Chatbots can raise unique labor and employment law risks

While few jurisdictions have passed laws directly regulating artificial intelligence applications to date, introducing a human resources chatbot to your workplace still carries the potential risk of violating any number of established labor and employment laws.

The launch of ChatGPT on November 30, 2022, ushered in an explosion of business interest in bringing large language model artificial intelligence applications into the workplace. To capitalize on the efficiencies this technology offers, many employers have implemented, or are considering, chatbots to serve human resources functions. Such a program can meet a wide range of needs, from gathering job application information and conducting basic candidate screening to acting as an initial point of contact that answers employee questions about topics such as benefits and company policies or directs users to other resources.

Few jurisdictions have passed laws directly regulating artificial intelligence applications to date, yet an HR chatbot can still run afoul of any number of established labor and employment laws. Over the course of 2022, the federal agencies charged with enforcing those laws, including the Equal Employment Opportunity Commission (EEOC) and the General Counsel of the National Labor Relations Board (NLRB), published guidance addressing how artificial intelligence tools, including chatbots, can violate the Americans with Disabilities Act (ADA) and the National Labor Relations Act (NLRA). Employers can therefore expect conflicts between employee legal protections and artificial intelligence to draw strong interest from enforcement agencies.

Some of the legal risks associated with chatbots are more readily apparent than others. One issue, highlighted in the EEOC's May 12, 2022, publication, “The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees,” relates to health condition questions. Among the potential violations the EEOC identified: the ADA restricts an employer's ability to conduct disability-related inquiries and medical examinations. If an employer's chatbot directly questions a candidate or employee about a health condition, or poses inquiries likely to elicit information about one, that dialogue may infringe the individual's rights under the ADA.

Other legal risks presented by chatbot applications may be less obvious. For example, a chatbot deployed as an initial point of contact for human resources needs may be used by employees to report concerns or complaints of discrimination, harassment, or retaliation. Many human resources professionals are well aware that a timely response to such issues is essential to preserving invaluable defenses, among other worthy objectives.

However, many employers have experienced the communication challenges that Gen Z employees can present through their distinctive use of emojis and slang, particularly in remote work settings that rely heavily on electronic communication. A chatbot that is not equipped to understand, or at least redirect, a potential report of misconduct expressed in this language may miss the need, and the opportunity, to address problematic behavior. This issue underscores the importance of maintaining a human presence in human resources functions and resisting the temptation to rely entirely on artificial intelligence for this critical department.
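For illustration only, a minimal sketch of one safeguard along these lines: a keyword- and emoji-aware escalation check that routes a possible misconduct report to a human rather than letting the bot answer alone. Everything here is hypothetical (the term lists, the emoji set, and the function names), and any real deployment would need far broader, regularly updated language coverage:

```python
# Hypothetical sketch: flag messages that may be misconduct reports so a
# human reviews them instead of the chatbot answering on its own.
import re
import unicodedata

# Illustrative term lists only; real coverage of slang and emoji would be
# far broader and maintained over time.
MISCONDUCT_TERMS = {
    "harass", "harassment", "discriminate", "discrimination",
    "retaliate", "retaliation", "hostile", "inappropriate", "sus",
}
MISCONDUCT_EMOJI = {"\U0001F621", "\U0001F6A9", "\U0001F4A2"}  # 😡 🚩 💢

def needs_human_review(message: str) -> bool:
    """Return True when a message may be a report of misconduct."""
    text = unicodedata.normalize("NFKC", message).lower()
    words = set(re.findall(r"[a-z']+", text))
    return bool(words & MISCONDUCT_TERMS) or any(
        ch in MISCONDUCT_EMOJI for ch in message
    )

def answer_with_chatbot(message: str) -> str:
    # Stand-in for whatever vendor LLM call the company actually uses.
    return f"(automated answer to: {message})"

def handle_message(message: str) -> str:
    if needs_human_review(message):
        # Escalate rather than letting the bot attempt, and possibly miss,
        # a harassment or discrimination report.
        return ("Thanks for raising this. A member of the HR team will "
                "follow up with you directly.")
    return answer_with_chatbot(message)
```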

Additionally, risks can arise from what happens to the information employees communicate through a chatbot. The latest large language model chatbots use machine learning to improve performance over time. That learning process may involve assimilating the information employees submit and sharing it externally to expand the data available to the model and improve its accuracy. Companies should ask what happens to information once an employee submits it to the chatbot: the business may risk losing confidentiality and trade secret protections if employee communications are disclosed to third parties through the machine learning process.
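One common mitigation, offered here as an illustration rather than anything drawn from the guidance above, is to redact recognizable confidential identifiers before employee text leaves company systems. A minimal sketch; the patterns, the internal "PROJ-" code format, and the `send_to_llm_provider` call are all hypothetical:

```python
# Hypothetical sketch: scrub likely-sensitive strings before a prompt is sent
# to an external LLM provider that may retain or train on submitted inputs.
import re

# Illustrative patterns only; real redaction rules would come from counsel
# and security review.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),         # U.S. SSN shape
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"), "[EMAIL]"),
    (re.compile(r"\bPROJ-\d+\b"), "[PROJECT-CODE]"),         # internal codes
]

def redact(text: str) -> str:
    """Replace recognizable sensitive substrings with placeholders."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

def send_to_llm_provider(prompt: str) -> str:
    # Stand-in for whichever vendor API the company actually uses.
    return f"(model response to: {prompt})"

def ask_chatbot(employee_message: str) -> str:
    return send_to_llm_provider(redact(employee_message))
```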

A chatbot application may raise other legal concerns related to the personal health information employees can submit to it. An employee may voluntarily disclose the details of a health condition to a chatbot while seeking benefits-related information. Even setting aside the solicitation issue discussed above, the employer may have obligations to protect and retain the employee's communication as confidential medical information. This hypothetical also raises the possibility that the employer has received sufficient notice of a serious health condition supporting a need for leave under the Family and Medical Leave Act (FMLA), or a request for accommodation under the ADA. If the chatbot does not respond appropriately, the employer may face a claim of interference with or denial of rights under these laws.
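To make the point concrete, here is a minimal, hypothetical sketch of one way a chatbot backend might treat a volunteered health disclosure: keep it in a restricted record, separate from the general transcript, and hand it to a person, since the message may constitute FMLA or ADA notice. The `HEALTH_HINTS` triggers and the `open_hr_task` call are placeholders, not a tested design:

```python
# Hypothetical sketch: when an employee volunteers health information, store
# it separately as confidential medical information and create a human
# follow-up task, since the message may be FMLA or ADA notice.
from dataclasses import dataclass, field
from datetime import datetime, timezone

HEALTH_HINTS = ("surgery", "diagnosis", "treatment", "medical leave",
                "my condition", "doctor")  # illustrative triggers only

def open_hr_task(employee_id: str, reason: str) -> None:
    # Stand-in for the company's ticketing or case-management system.
    print(f"HR task created for {employee_id}: {reason}")

@dataclass
class ConfidentialMedicalLog:
    """Kept apart from general chat transcripts, with restricted access."""
    entries: list = field(default_factory=list)

    def record(self, employee_id: str, message: str) -> None:
        self.entries.append((datetime.now(timezone.utc), employee_id, message))

def handle_benefits_question(log: ConfidentialMedicalLog,
                             employee_id: str, message: str) -> str:
    if any(hint in message.lower() for hint in HEALTH_HINTS):
        log.record(employee_id, message)
        open_hr_task(employee_id, reason="possible FMLA/ADA notice")
        return ("I've shared your question with the HR team so a person can "
                "follow up about leave or accommodation options.")
    return "Here is some general benefits information..."  # placeholder path
```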

Employers should also be mindful of how artificial intelligence tools such as chatbots are received by employees. Communicating how and why the company chose to introduce a chatbot should not be overlooked. Failing to help the workforce understand the benefits of these applications, and to address individual concerns, can undermine efforts to maintain positive employee relations. Without a concerted investment in your messaging strategy, your latest and greatest artificial intelligence may become a catalyst for employees to seek outside assistance with their concerns.


These issues highlight the need to proceed carefully when implementing chatbot technology. Companies should understand how the technology works so they can address foreseeable questions and issues. Contracts covering the terms and conditions of third-party technology may also be critical to establishing the company's rights to the information and resources needed to defend against employee claims. Lastly, employers should remain vigilant, with a plan to monitor the chatbot's functioning for compliance with labor and employment law requirements, rather than assuming the application works as intended once implemented.
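As one illustration of what such a monitoring plan could look like in practice, offered as an assumption-laden sketch rather than agency guidance, a backend might keep an append-only audit log of every exchange for periodic human review. The file name, record fields, and flag labels below are all hypothetical:

```python
# Hypothetical sketch: append-only audit log of chatbot exchanges so HR and
# counsel can periodically review them for problematic questions or missed
# complaints.
import json
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("chatbot_audit.jsonl")  # illustrative location

def log_exchange(user_id: str, prompt: str, response: str,
                 flags: list) -> None:
    """Record one exchange as a JSON line for later compliance review."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "prompt": prompt,
        "response": response,
        "flags": flags,  # e.g., ["possible_medical_disclosure"]
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

def flagged_exchanges(limit: int = 50) -> list:
    """Return recent flagged exchanges for a human compliance check."""
    entries = [json.loads(line)
               for line in AUDIT_LOG.read_text(encoding="utf-8").splitlines()]
    return [e for e in entries if e["flags"]][-limit:]
```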

Jesse Dill is a shareholder in Ogletree Deakins’ Milwaukee office and a member of the firm’s Technology Practice Group. He practices labor and employment law, with a specific focus on single-plaintiff litigation, class and collective matters under the Fair Labor Standards Act (FLSA), state wage and hour laws, and labor relations pursuant to the National Labor Relations Act (NLRA).