Health care/AI partnerships face compliance, security obstacles
Would-be innovators of digital health tools face a variety of challenges in the health care space.
Note from health care executives to developers of artificial intelligence-driven health products: You just don’t understand us. And until you do, we will adopt your products cautiously, if at all.
This message came through loud and clear from a recent survey commissioned by the global law firm Ropes & Gray in collaboration with a division of Crain’s New York Business. The survey group itself wasn’t large — 284 people responded to the emailed questions. But it was a relevant sampling, with 85 percent of respondents hailing from the C-suite, the board of directors, or a decision-making management position at health care industry employers.
Follow-up interviews with a smaller, selected group reinforced the overall conclusions: health care purchasers worry about the security of their data once it is shared with AI vendors, and they suspect that most vendors have not taken the time to truly understand the market they are attempting to penetrate.
Among the findings:
- 60 percent said health care’s “strongly entrenched business and reimbursement models make it difficult to bring digital health products to market.” In other words, if the insurance company or payor refuses to reimburse for the service, it won’t get much marketplace traction.
- Nearly half do not believe the AI developers “fully understand the health care market.” These respondents want the product developers to demonstrate health care expertise beyond the specific application of their product or service, and they want them to prove that they can maintain data security.
- Product/service pricing was another obstacle cited by respondents. In an evolving field, purchasers are uncertain about how to evaluate these new offerings.
The report focused on factors that currently are viewed as obstacles to partnerships among players on both sides of the transaction. Many AI product developers expect to fuel their growth through partnerships with customers in the health care field. But many of those surveyed were concerned about partnering with organizations that did not fully appreciate the need to lock down patient data. Responses included:
- 70 percent worry that “a digital health partner would fail to secure or encrypt data prior to it being shared.”
- 34 percent fear an accidental data breach by the AI partner.
- 27 percent worry that an AI partner will fail to secure “clear patient consent for the use of data.”
- 21 percent cited a concern that the AI vendor “would share patient data without proper de-identification.”
Still, 42 percent of respondents said it was “likely or somewhat likely they would partner or contract with an AI company over the next year.” About a third said that would probably not happen, and another one in five were uncertain whether such a partnership was in the cards. Those willing to partner with an AI vendor expect three primary qualities before moving forward: evidence of efficacy, complete compliance with all relevant regulations, and complete compliance with all data security and privacy standards.
“All sides now recognize the enormous potential at the intersection of these sectors,” the report concluded. “Digital tech is beginning to understand the importance to providers and pharma of proving efficacy in a clinical setting, while providers are more willing to embrace digital tech. But there are still cultural differences that need to be bridged, and significant roadblocks that must be overcome.”