Increased use of AI in health care raises questions about fairness and equity

Algorithms used in patient management can perpetuate racial disparities in health care.


The use of machine-learning algorithms is helping the health care industry efficiently collect and manage vast amounts of data.

“The amount of data collected about health care in the United States is enormous and continues to grow rapidly,” according to a report in Health Affairs. “Machine learning has become embedded in the health insurance industry for tasks such as predicting early disease onset, determining the likelihood of future hospitalizations and predicting which members will be medication-noncompliant. Algorithms are often developed to optimize interventions to drive improved health outcomes.”


Although technology is removing much of the human element from these processes, experts caution that an algorithm's outputs are only as fair and equitable as the data and design choices behind it.

“As machine learning is increasingly used in health care settings, there is growing concern that it can reflect and perpetuate past and present systemic inequities and biases,” the report said. “Researchers have begun to evaluate algorithms and their effects on disadvantaged or marginalized populations.”

In one notable study, algorithms used to identify patients for a care management program perpetuated racial disparities, further contributing to racial inequities in health care use and disease outcomes. This research led to immediate calls for greater transparency and accountability across the health care industry in how the use of algorithms is audited and how to avoid bias in predictive models. Researchers outlined several methods of addressing potential bias.

Industry vigilance. One way to check for bias is to examine rates of outreach and engagement in care management programs relative to the proportions of subgroups in the data. If the proportions of those targeted for outreach and engaged in care management do not reflect the underlying population distribution, one might conclude that there was an element of representational bias. However, a mismatch is not always evidence of bias: when resources are distributed equitably based on true care needs, some subpopulations may legitimately show higher engagement rates than others.
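The proportion check described above can be sketched in a few lines. This is an illustrative calculation, not a formal statistical test; the group labels and data are hypothetical.

```python
from collections import Counter

def representation_gap(population_groups, outreach_groups):
    """Compare each subgroup's share of program outreach against its
    share of the underlying population. A large negative gap for a
    group can signal representational bias (illustrative check only)."""
    pop_counts = Counter(population_groups)
    out_counts = Counter(outreach_groups)
    pop_total = len(population_groups)
    out_total = len(outreach_groups)
    gaps = {}
    for group in pop_counts:
        pop_share = pop_counts[group] / pop_total
        out_share = out_counts.get(group, 0) / out_total
        gaps[group] = out_share - pop_share
    return gaps

# Hypothetical data: group B is 40% of the population
# but only 25% of those targeted for outreach.
population = ["A"] * 60 + ["B"] * 40
outreach = ["A"] * 45 + ["B"] * 15
print(representation_gap(population, outreach))
```

As the article notes, a gap like this warrants investigation but is not conclusive on its own; it must be weighed against differences in true care needs.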

Counterfactual reasoning. If a given person were from a different subpopulation but had the same health profile, would he or she have received the same predicted probability of an outcome? By assessing counterfactual fairness, it is possible to examine how a model treats both race and other potentially unmeasured confounding factors that may be correlated with race.

Error rate balance and error analysis. Error rate balance involves comparing false positive and false negative rates for predictions within specified subpopulations. Comparing these error rates across groups reveals which patterns the model is detecting, and which it is missing, for each subgroup.
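The per-subgroup error comparison can be sketched directly from binary labels and predictions. The data below is hypothetical and chosen only to show how the rates can diverge between groups.

```python
def error_rates_by_group(y_true, y_pred, groups):
    """Compute false positive and false negative rates within each
    subgroup (binary labels and predictions; illustrative sketch)."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        fp = sum(1 for i in idx if y_pred[i] == 1 and y_true[i] == 0)
        fn = sum(1 for i in idx if y_pred[i] == 0 and y_true[i] == 1)
        negatives = sum(1 for i in idx if y_true[i] == 0)
        positives = sum(1 for i in idx if y_true[i] == 1)
        rates[g] = {
            "fpr": fp / negatives if negatives else 0.0,
            "fnr": fn / positives if positives else 0.0,
        }
    return rates

# Hypothetical example: identical true outcomes in both groups,
# but the model makes far more false positives for group B.
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 0, 1, 1, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(error_rates_by_group(y_true, y_pred, groups))
```

Unequal false positive or false negative rates across subgroups, as in this toy example, are exactly the kind of pattern the report suggests auditors should look for.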

“With recent calls for active vigilance of machine learning and its implementations, institutional and industry commitments to increase equity in health care are needed,” the report concluded. “This includes developing and disseminating best practices in bias detection and remediation, as well as the development of targeted programs to reduce bias and promote equity, and deeper involvement and communication with the members and communities served by health plans. With these combined efforts, more equitable health care can be achieved.”
