What you need to know about using ChatGPT for benefit administration

We’re not there yet for the reasons outlined in this article, but ChatGPT is a disrupter with the potential to reshape benefits administration.


Since OpenAI made its ChatGPT available to the public in November 2022 (and the most recent version, GPT-4, in March), HR innovators have considered ways to leverage its ability to take large amounts of data and generate human-like responses to user prompts. While HR innovators explore its uses, another camp of HR leaders is wringing its collective hands over concerns about privacy, accuracy, bias and employee use in the workplace.

An online search for “ChatGPT uses for HR” (notably powered by ChatGPT in most browsers) produces numerous HR applications — including recruiting, job descriptions, compliance and employee engagement. Mostly absent, however, are uses associated with benefits and benefits administration. Pre-ChatGPT, benefits administration service providers were quick to embrace AI, touting it as a key product feature that, in most cases, was little more than a hyper-smart chatbot. So far, our team has not heard of any new benefit tools incorporating ChatGPT.

Benefits are about facts. You offer “Benefit X,” or you don’t. The specifics of Benefit X are consistent based on the individual’s employment status and chosen level of coverage. ChatGPT may help employees understand their coverage options, but many existing and effective tools already support this function. It remains to be seen whether ChatGPT can do a better job.

Limitations of ChatGPT for benefits

ChatGPT’s developer, OpenAI, is forthright about the tool’s current limitations. The following discusses how those limitations, and the risks that flow from them, pertain to benefits administration.

Accuracy. ChatGPT was not designed to be a source of information. Instead, its primary function is to use language that mirrors human conversation (and sounds convincing). While great for recruiting, convincing an employee that they have (or should use) a benefit may be inappropriate and potentially dangerous. Imagine ChatGPT telling an employee that their medical plan covers a costly treatment, an error the employee discovers only when the insurance company denies the claim. Or worse, an employee fails to pursue a lifesaving preventive procedure because ChatGPT incorrectly advised that it was not covered. More likely are minor inaccuracies, e.g., informational errors associated with the numerous “Blues” health plans, which share similar branding and coverage but differ based on ownership, location and plan.

Privacy. ChatGPT is not HIPAA compliant. The terms of use allow for the collection and use of personal data, which conflicts with HIPAA-approved uses of personal information. Even if the employer doesn’t share protected employee data, there is the possibility an employee will do so during a “conversation” with ChatGPT. And because OpenAI uses conversations to train ChatGPT, there is a risk of sharing sensitive information with future users. Because of this risk, some businesses do not allow employees to use ChatGPT for work-related functions.

As the technology is evolving, there is much that is not known. For instance, might ChatGPT begin to “profile” employees? Profiling raises significant concerns, especially with multi-faceted uses where the information collected for one purpose is shared for an unrelated inquiry based on a learned profile of an employee “type.”

Bias. Concerns about algorithmic bias are as old as the modern algorithm itself. Common examples pertain to hiring practices. LexisNexis reports that lawmakers have introduced AI-related legislation in at least 28 states. Four states — Colorado, Illinois, Vermont and Washington — enacted AI legislation barring AI’s use in job interviews due to its potential for bias. Regarding benefits, one can envision a scenario in which ChatGPT fails to provide vital information to an employee with a disability because that disability affects only a tiny percentage of the population.

Employers are responsible for reviewing their artificial intelligence tools to ensure they do not violate the Americans with Disabilities Act, according to 2022 guidance from the Equal Employment Opportunity Commission (EEOC) and the U.S. Department of Justice (DOJ). Practically, this is a tall order, given that bias is usually uncovered only after a violation has been reported. Employers considering ChatGPT must first rigorously ascertain the legality of its use, which will be particularly challenging for multistate and international employers, who must navigate an evolving morass of regulations.

Opportunities for ChatGPT with benefits

Gallagher’s HR technology consulting practice embraces any new technology that effectively supports and advances an organization’s people strategy. We’re excited about ChatGPT’s potential to support HR functions. While its use with benefits raises some concerns, there are also opportunities, so long as appropriate guardrails are in place.

As with any cutting-edge technology, we advise HR leaders to stay informed and to assess the benefits and risks of using ChatGPT before proceeding.


Artificial intelligence — ChatGPT included — has vast potential in the HR arena, including benefits administration. Imagine a tool that analyzes an employee’s claims history, use of voluntary benefits and current biometrics to propose a customized benefit plan. We’re not there yet for the reasons outlined in this article, but ChatGPT is a disrupter with the potential to reshape benefits administration.

Ed Barry, area president, HR Technology, Gallagher