HHS’s new AI health care task force is taking shape, launching this fall

Last fall, the Biden administration issued an executive order giving the Department of Health and Human Services the daunting task of putting guardrails around increasingly advanced artificial intelligence health care models.


Although everyone agrees that artificial intelligence will have a profound impact on the future of health care, no one is sure exactly what that impact will be. AI will likely affect the industry in ways that are difficult even to anticipate today.

The Biden administration attempted to get ahead of the curve last fall by issuing an executive order giving the Department of Health and Human Services the daunting task of creating a regulatory structure to oversee the use of AI in health care. A task force comprising senior administration officials is making progress on the plan, which is due this fall, said Greg Singleton, chief artificial intelligence officer for HHS.

“The real-time learning environments — we’re going to have to come up with and develop some sort of assurance, monitoring, risk-management practices around — just to kind of put a buffer around those, so we’re comfortable with them,” he said during the recent Healthcare Information and Management Systems Society conference. “And a lot of good work is going into that.”

“There are four things that we’re working on for the executive order’s [interim] end-of-April deadline,” Micky Tripathi, the national coordinator for health information technology at HHS and co-chair of the task force responsible for devising the plan, told the Washington Post. “One is a plan to look at the uses of AI in public benefits and HHS activities. The second is an overarching strategy for being able to assess the quality of AI tools. The third is specific to the National Institutes of Health regarding synthetic nucleic acid screening and other things like that … Finally, we’re looking to see if there are any more appropriate actions to advance nondiscrimination compliance.”

Although individual agencies have issued AI rules in the past, regulators are now faced with putting guardrails around increasingly advanced models, including some that automatically learn and evolve in real time without human feedback. Dedicated working groups are focusing on core issues, including drugs and devices; research and discovery; critical infrastructure; biosecurity; public health; health care and human services; internal operations; and ethics and responsibility.

The United States has taken a largely hands-off approach to AI oversight so far, even as the technology becomes increasingly pervasive. Regulators and legislators have said they don’t want to inadvertently tamp down innovation in the private sector as the nation — home of most of the heaviest hitters in AI development, such as Microsoft, Alphabet and Meta — competes for AI supremacy.

Nevertheless, federal agencies have taken several steps they can build on to oversee AI in the health care space, Singleton said:


“Those are just examples of how we’re approaching it, looking at what we have in our stable and where that aligns with our values,” Singleton said. “Is it the end-result answer? Does it give the answer for what AI should be in 10 years? No.”