Shaping the future of AI in health care: Senate looks to set ‘guardrails’ for patient safety

Artificial intelligence could reduce clinicians’ administrative work and accelerate drug innovation, but the technology needs oversight, lawmakers said at a hearing last week examining both its benefits and its risks.

Senator Ed Markey (D-Mass.) chairs the Subcommittee on Health, Education, Labor & Pensions. Photo by Diego M. Radzinschi/The National Law Journal.

The Biden administration recently announced its intention to develop safeguards for the promising but uncertain future of artificial intelligence. Last week, legislators began looking at how to implement the directive in health care.

Senators from both parties discussed the potential benefits and possible threats of AI during a Health, Education, Labor & Pensions subcommittee hearing.

“If we’re to write rules surrounding AI, let’s be careful not to destroy innovation or allow those who would harm us to get ahead of us,” said Sen. Roger Marshall, R-Kan., who is a physician. “After all, artificial intelligence and machine learning have been making remarkable discoveries and improving health care for some five decades without much government interference.”

AI could be a boon for the health-care sector, cutting down clinicians’ administrative work, reducing spending and accelerating drug discovery. Keith Sale, M.D., vice president and chief physician executive of ambulatory services at the University of Kansas Health System, told lawmakers that AI-assisted documentation lets him shave hours off the clinic time he would otherwise spend reviewing notes and entering them into the electronic health record.

“It’s a tool,” he said. “It is not something that should replace what I decide in practice or how I make decisions that affect my patients. So ultimately, it is designed to enhance my practice, not replace me in practice.”

AI also could allow clinicians to access and analyze data they couldn’t on their own, and the technology could serve as a platform for sharing information and evaluating the performance of the health-care system, said Kenneth Mandl, a professor at Harvard and director of the computational health informatics program at Boston Children’s Hospital.

“Amazing concept, to go from spending 18% of our GDP down to maybe 8% or 10%, like the rest of the world,” said Sen. John Hickenlooper, D-Colo. “And that’s one way we can move in that direction.”


However, without adequate guardrails, AI could cause harm to patients or exacerbate existing inequities, said Sen. Edward Markey, D-Mass., who chairs the subcommittee.

“I have concerns, because when we talk about the promises of AI, we need to also talk about its risks,” he said. “We have learned time and again that left to self-regulate, big tech puts profit over people almost every time. We cannot afford to repeat that mistake by not regulating artificial intelligence now.”

Dr. Thomas Inglesby, director of the Johns Hopkins Center for Health Security, agreed.

“We only have one chance to get things right for each new open-source model release,” he said. “These measures taken together will reduce the risk of high-consequence malicious and accidental events derived from AI that could trigger future pandemics.”

“Those closest to the industry know the challenges [and] they understand the opportunities and the risks the best,” said Marshall. “They also know the most practical and impactful solutions as we look for guardrails that protect Americans but at the same time promote innovation.”