
Many organizations today rightly count fairness and inclusivity as important business goals, and in large part, employees agree with the sentiment. More than half of employed U.S. adults surveyed in 2023 said increasing diversity, equity and inclusion (DEI) at work was a good thing.

Despite the best intentions, however, there are new potential sources of bias in the workplace today: namely, data-driven technology. These strategic enablers, which range from AI recruiting platforms to automated performance management systems, can perpetuate the very thing HR leaders are working so hard to root out.


The solution, of course, doesn’t lie in fearing or abandoning useful technology. Instead, we should make an intentional effort to understand the underlying issues and correct them. Employee benefits is one area that is incredibly deserving of this attention. Benefits touch some of the most personal aspects of an employee’s life and are one of the top ways employees evaluate empathy and value in the workplace. In 2024, 88% of employees said they would be willing to stay with an employer that empathizes with their needs.

For these reasons and more, many HR and technology leaders want to grow in their understanding of the potential for bias in employee benefits technology.

How bias creeps into technology

We tend to think of data-driven technology as an objective tool, one that doesn't contend with prejudicial feelings and emotions the way people do. As a result, it's easy to believe that data can greatly reduce or even eliminate bias.

But in reality, unintended bias can infiltrate data-driven technology in many ways. The powerful business tools of today, like AI, have a greater ability to accelerate and exacerbate both human and technological problems.

This is why it's so important for technology users to understand the many places bias can hide.

Algorithmic bias: Data used to train AI systems can reflect societal biases. For example, historical data may skew toward women utilizing parental leave options, leading the technology to recommend that benefit less frequently to male employees. This can lead to male and non-binary employees missing out on important, family-friendly benefits that might otherwise enhance their loyalty to the company.

Incomplete data: No dataset is ever fully complete, but it’s still important to consider which data may be limited or absent. With incomplete data, it is harder to accurately represent and cater to the needs of a more diverse workforce. A system that doesn’t collect enough location data, for instance, may recommend in-network health care provider options that aren’t reasonable for an employee to actually take advantage of.

Information retrieval bias: If programming is not carefully considered, an AI tool may prioritize certain types of information based on programmed assumptions. An improperly trained AI chatbot, for instance, could provide an employee with only partial information about mental health resources, like an employee assistance program (EAP), something our research indicates is of growing importance to employees.

Language bias: The way AI communicates can be biased. If, for example, a generative AI tool relies on gendered language when referring to certain benefits, it can leave employees confused. A father may not understand his benefit eligibility if parental leave is referred to as maternity leave. Or, if a system can only recognize and pronounce Anglo names, employees from other cultures may feel frustrated and excluded.
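To make the algorithmic-bias point above concrete, here is a minimal, hypothetical sketch of how an analyst might spot a skew in which groups a system recommends a benefit to. The group labels, the log data and the `recommendation_rates` helper are all invented for illustration; they are not part of any real product.

```python
from collections import defaultdict

# Hypothetical benefit-recommendation log: (employee group, was the
# benefit recommended?). All groups and values here are made up.
log = [
    ("women", True), ("women", True), ("women", True), ("women", False),
    ("men", True), ("men", False), ("men", False), ("men", False),
]

def recommendation_rates(records):
    """Share of each group that received the recommendation."""
    totals, hits = defaultdict(int), defaultdict(int)
    for group, recommended in records:
        totals[group] += 1
        hits[group] += recommended  # True counts as 1
    return {g: hits[g] / totals[g] for g in totals}

rates = recommendation_rates(log)
# A simple "demographic parity" gap: a large difference between the
# highest and lowest group rates flags a skew worth auditing.
gap = max(rates.values()) - min(rates.values())
print(rates, round(gap, 2))  # women: 0.75, men: 0.25 -> gap 0.5
```

A check like this doesn't prove bias on its own, but a persistent gap is a signal to examine the historical data the recommendations were trained on.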

Each of these potential issues is worthy of focused attention, as research continues to show that personalization is incredibly important to the employee experience. If biases in data-driven technology create feelings of "otherness," not only is personalization lost, but the experience may end up worse than a one-size-fits-all approach.

Stakes of biased technology getting higher

Integrating responsible, unbiased technology is becoming more critical for success as businesses come to rely on technology at exponentially greater rates. Organizational tech stacks are growing in size and complexity as more business units turn to software solutions for improved productivity, accuracy—and perhaps most notably, decision making.

The HR tech stack is a microcosm of the larger enterprise technology expansion. According to HR technology industry analyst Josh Bersin, the average large company has at least 80 HR tools. From payroll automation to benefits administration, HR teams are relying on a range of digital tools to reduce manual tasks while also improving the employee experience.

In addition to the sheer volume of software coming on board, the integration of these tools extends the potential reach of unintended bias. As more tools begin to share data, prescribe automated responses to that data and inform human decisions, it becomes increasingly important to validate the fairness and accuracy of each cog in the system.

Scrutinizing HR tech for fair and equitable AI

For employees to feel that inclusive benefits truly meet their individual needs, the decision-makers integrating benefits administration technology must be on the lookout for inherent bias in the third-party systems they deploy. By performing sound due diligence when partnering with HR tech providers, decision-makers can mitigate the risk of tech biases. What’s more, they may even be able to count benefits technology as a key enabler of workplace DEI.

There is no one-size-fits-all approach to leveraging benefits to achieve equitable outcomes. Employees have different financial situations and risk tolerances, health needs, language preferences, claims histories and more. Scalable AI solutions can be purposefully built to observe and learn, adapting and adjusting to meet employees’ unique needs and preferences.

For example, when AI helps personalize the employee benefits experience by reminding employees about their benefits, employers see an average 19% increase in benefits activation. Similarly, when benefits teams leverage technology to personalize benefits communications, they see a 46% average open rate, more than double what marketers typically aim for.

These data points show that benefits technology not only helps eliminate confusion for employees, it can also drive increased awareness beyond enrollment. With five generations in the workforce, intelligent HR technology can help make benefits more accessible and equitable to all demographics.

6 queries for potential HR tech partners

Here are some questions to ask of HR tech providers when investigating the potential for hidden bias in their systems:
• How does your technology team ensure that the development process includes diverse experiences, perspectives and cultural backgrounds? Reviewing a provider's diversity and inclusion policies can be a good way to evaluate how well an HR tech vendor prioritizes diversity within its developer workforce.
• Does your development team use any empathy-based or anti-bias approaches to train its AI systems, such as blending machine learning, linguistic expertise and generative AI to engage with people in increasingly human ways? Such an approach improves a technology's ability to interpret and react to an individual's specific situation and emotional state, so it can have personalized, compassionate (i.e., human-like) engagements with employees. Virtual assistants designed with an empathy-based approach, for example, get to an employee's "question behind the question" to further anticipate needs and quickly resolve issues.
• Describe how you build diverse datasets and govern data for quality, accuracy and representativeness. Asking to review the vendor's data governance protocols is a way to further ensure the data used to train AI represents a diverse employee population.
• Do you conduct regular fairness testing to identify and address biases that may emerge as more employees engage with your technology? Continuous testing helps maintain the inclusivity of an AI system over time and is particularly critical with so many different generations, cultures, abilities and work styles collaborating on projects.
• Are human teams regularly reviewing, assessing and refining the technology and data models to create and maintain inclusive user experiences? Meeting one-on-one with the humans behind the machines can be a terrific way to assess the value an HR tech vendor places on eradicating bias from its technology.
• Can you introduce us to clients who are as interested in unbiased, fair technology as we are? Nothing beats a firsthand customer account when it comes to figuring out whether a potential partnership is a good fit. For benefits technology specifically, it may also make sense to ask to speak with an end-user employee to get a feel for their personal experience with the system. Case studies, too, can offer a glimpse into what the HR tech provider views as a successful implementation of unbiased benefits technology.
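As one illustration of what the "regular fairness testing" mentioned above could involve, here is a hedged sketch of the four-fifths (80%) rule, an informal adverse-impact screen drawn from U.S. employment guidelines. The group names, counts and the `adverse_impact_ratio` helper are invented for this example, not a vendor's actual test suite.

```python
def adverse_impact_ratio(selected, total):
    """Each group's selection rate divided by the highest group's rate.

    Under the informal "four-fifths rule," a ratio below 0.8 for any
    group is a flag for potential adverse impact worth investigating.
    """
    rates = {g: selected[g] / total[g] for g in total}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Illustrative counts: employees nudged toward a benefit, by group.
selected = {"group_a": 40, "group_b": 24}
total = {"group_a": 50, "group_b": 50}

ratios = adverse_impact_ratio(selected, total)
# group_a rate 0.80, group_b rate 0.48 -> ratio 0.6, below the 0.8 line
```

Running a screen like this on a schedule, as new interaction data accumulates, is one concrete form the continuous testing described in that question might take.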

Emulate, but don’t replace

There are many exciting developments on the benefits technology horizon that can make the experience of interacting with the technology incredibly personal, supportive and human. But the challenge of addressing bias in that same technology is an excellent example of why the goal of even very sophisticated technology should be to augment human ability, not replace it.

Much like humans, technology isn’t perfect. Yet, there is a lot of promise in harnessing the strengths of one to address the weaknesses of the other. With an intentional approach, technology can help ensure the employee benefits experience serves everyone in the workplace, amplifying empathy and becoming a powerful partner for human connection.

Rae Shanahan is chief strategy officer for Businessolver, a benefits administration technology innovator.
