Will AI transform health care? Most doctors are hopeful, but lack trust (for now)
Six in 10 clinicians believe AI can help with decision-making, but 55% said it is not yet ready for medical use, and skepticism was even higher among those with 16-plus years of experience, a new survey finds.
Despite the recent buzz about artificial intelligence, many health care professionals and patients are skeptical about its use in medicine.
Although the majority of doctors believe AI has the potential to transform health care, many still say the technology is not yet ready and contains biases, according to GE HealthCare's recent Reimagining Better Health study. The findings come as a number of health care giants continue to explore and experiment with AI models, including generative technologies such as ChatGPT and conversational AI, to improve patient experience and outcomes, automate tasks and enhance productivity.
Six in 10 clinicians believe the technology can help with decision-making, 54% said it enables faster health interventions and 55% said it can help improve operational efficiencies. However, 55% of respondents said AI technology is not yet ready for medical use and 58% said they do not trust AI data. Among clinicians with more than 16 years of experience, skepticism was even higher, with two-thirds lacking trust in AI.
Respondents cited two key reasons for their distrust:
- The potential for algorithms to produce unfair or discriminatory outcomes because of factors such as incomplete training data, flawed algorithms or inadequate evaluation processes; 44% of respondents said the technology is subject to built-in biases.
- A lack of training: only 55% of surveyed clinicians said they receive adequate training on how to use medical technology.
“As an industry, we need to build clinician understanding of where and how to use it and when it can be trusted fully vs. leaning on other tools and human expertise,” said Taha Kass-Hout, chief technology officer for GE HealthCare. “I refer to this as ‘breaking the black box of AI’ to help clinicians understand what is in the AI model.”
As health care systems around the world face extreme pressures, clinicians are burning out and considering leaving the industry. According to the World Health Organization, there could be a shortage of 10 million health workers by 2030, when 1.4 billion people will be 60 or older. In that context, AI-driven systems could take on repetitive, low-level tasks so that workers can focus on patient care.
“While we are still in the early stages of seeing the true impact of these technologies, with appropriate human supervision, AI can help to reduce the burden of data query and analysis on clinicians so that they can be focused on what really matters: Improving patient outcomes,” Kass-Hout said.