Gen AI virtual agents may help put the human element back in insurance

Generative AI agents complement human insurance agents by handling the ‘grunt work,’ freeing those employees to focus on empathy and connection with policyholders.

“I don’t see the need for insurers to drastically reduce their employees servicing clients,” said Jerry Haywood, CEO of boost.ai. “I see the focus of those employees changing from more administrative tasks to more value-add and customer retention tasks.”

“You take those employees today and apply them differently,” said Haywood. “When they [insurers] don’t need to crunch data to understand the analytics and the trends that sit behind it…if that can be done with generative AI, then that claims adjuster can actually spend more around maybe the personal situation that that individual is in and allowing them to build empathy during that journey so that that customer is a customer for life.”

Benefits of AI agents

Insurers and policyholders disagree on AI in underwriting and claims, but insureds welcome its use in consumer-facing areas, such as marketing and customer service. Insurers often use generative AI and conversational AI to give policyholders 24/7 access to support, with the software fielding incoming queries. These bots can authenticate the user, pull up their policy documents and answer basic questions about their coverage. Haywood says how AI is used comes down to the intent of the insurance carrier: some rely on it heavily throughout the customer journey, while others pair conversational or generative AI with human agents.

Generative AI is good at summarizing and could go a long way toward helping policyholders understand their coverage. Haywood used it himself to identify the differences between his two auto insurance policies. He uploaded eight documents to the overall flow and asked, “Are there differences between my two auto insurance policies that I should be aware of?” Within seconds, the AI-enabled virtual agent highlighted a key difference Haywood had no idea existed.

AI “hallucinations”

Generative AI is not without its pitfalls. Haywood notes AI carries some risk of “hallucinations,” which occur when the AI agent generates information that does not exist in a document. These programs work by pulling data from a vast number of documents and generating a response based on that information.

“We don’t allow that, especially in insurance,” said Haywood. “We’re only pulling off one specific document, so the platform can’t generate content from what it doesn’t have access to… It will generate the content based off that specific document for that policyholder, thereby ensuring a[n] elimination of the [hallucination] risk in the overall journey.”
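The safeguard Haywood describes can be illustrated in code. This is a minimal sketch, not boost.ai’s actual implementation: the prompt sent to the language model is built from one specific policyholder’s document only, with an explicit instruction to refuse rather than guess, so the model has no other content to draw an answer from. The function name and prompt wording are illustrative assumptions.

```python
# Hypothetical sketch of single-document grounding: the model is only ever
# shown this one policy document and is told to refuse rather than invent.
def build_grounded_prompt(policy_document: str, question: str) -> str:
    """Build an LLM prompt restricted to one policyholder's document.

    Because the prompt contains nothing beyond `policy_document`, the model
    cannot answer from other policies or training data -- at worst it falls
    back to the refusal line below.
    """
    return (
        "Answer the question using ONLY the policy document below. "
        "If the answer is not in the document, reply exactly: "
        "'I can't find that in your policy.'\n\n"
        f"--- POLICY DOCUMENT ---\n{policy_document}\n"
        f"--- QUESTION ---\n{question}\n"
    )

# The resulting prompt would then be sent to whatever LLM the platform uses,
# e.g. answer = llm.complete(build_grounded_prompt(doc_text, user_question))
```

Restricting the context this way trades breadth for reliability: the agent cannot compare policies it was not given, but it also cannot fabricate coverage details.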

Human agents irreplaceable

Policyholders can work with a generative AI-enabled virtual agent to discuss the details of their plan or their options when asking about coverage. However, there are some scenarios where a human insurance agent should always get involved, such as claims processing.

“It [AI] can demonstrate characteristics of empathy, but is it truly empathetic in its engagement? Less so,” said Haywood. “Where there’s a personal matter that they may want to feel empathy from the organization in which they’re engaging with, an insurer might choose to proactively push it to a human.”


Consumers in all industries want to feel heard and understood, especially when going through a difficult time, such as after a house fire or auto accident. In many cases, they need to talk with a human agent to experience good customer service. We all recall the days of screaming “representative” into the phone to be switched from an automated service to an employee, but that’s not the case with virtual agents.

“We’re not forcing organizations or individuals of organizations to have a purely automated conversation,” said Haywood. “If a customer asks for it or if we detect that that might be the best approach, [we] proactively suggest that [a human agent] as the next option.”