Constructing AI guardrails to mitigate prior authorization process risks
The urgency of establishing and adopting ethical and responsible AI in health care is becoming increasingly evident.
Health organizations and their stakeholders have rapidly turned to AI and machine learning to expedite high-quality care delivery and optimize administrative workflows. Predictive AI, paired with accurate and comprehensive data entry, can meaningfully improve overall health care efficiency. However, as health care providers and health plans increasingly adopt AI, a new concern emerges: potential overreliance on AI for critical decision-making, especially in its use to optimize antiquated prior authorization (PA) approval processes.
A recent investigation into the use of AI-based technology to review prior authorization requests found that hundreds of thousands of pre-approved claims denied within a two-month period received, on average, just 1.2 seconds of physician review each.
These findings led the American Medical Association (AMA) to advocate for greater AI accountability and underscored the need for stricter evaluation by physicians and clinical experts. Without such safeguards, and without AI frameworks designed to ensure the responsible and effective integration of health care automation technology, care delivery could suffer significantly.
The critical need for care-driven AI guardrails
An often-overlooked point is that many PA denials are rooted in complex administrative procedures, chiefly insufficient pre-authorization information, which complicates the assessment of medical necessity.
A 2022 AMA study found that 93% of physicians experienced delays in patient care due to sluggish PA approval timelines. Although AI can streamline and expedite these processes, it should never be used to automatically deny authorization requests or to delay patients' access to appropriate care. Rather, AI's primary role should be to expedite positive health outcomes and guide providers toward the best treatment options, regardless of the PA outcome.
Employing predefined templates and responsible AI algorithms helps approve PA requests before claims are submitted, reducing the chance that errors lead to the denial of pre-approved claims. For instance, AI can detect that a certain piece of clinical information is missing from the request, and can go further by retrieving that information from the medical record, avoiding a denial on the back end. With comprehensive patient reference data and clinical oversight, AI can provide optimal PA recommendations for treatment coverage, getting to “yes” faster.
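The pre-submission check described above can be sketched as a simple completeness pass: flag missing fields, fill them from the medical record where possible, and route anything still unresolved to a human reviewer. This is a minimal illustration, not any vendor's production PA system; the field names, request structure, and retrieval step are all assumptions.

```python
# Hypothetical sketch of a pre-submission completeness check for a PA request.
# Field names and the flat-dict request structure are illustrative assumptions,
# not a real payer or EHR schema. The check only flags gaps for retrieval or
# human review; it never produces a denial.

REQUIRED_FIELDS = {"diagnosis_code", "procedure_code", "date_of_service", "clinical_notes"}

def find_missing_fields(request: dict) -> set:
    """Return required fields that are absent or empty in the request."""
    return {f for f in REQUIRED_FIELDS if not request.get(f)}

def prepare_request(request: dict, medical_record: dict) -> dict:
    """Fill gaps from the medical record where possible; flag the rest."""
    completed = dict(request)
    still_missing = set()
    for field in find_missing_fields(request):
        if medical_record.get(field):
            completed[field] = medical_record[field]  # retrieved, not guessed
        else:
            still_missing.add(field)  # route to a human reviewer, never auto-deny
    completed["needs_review"] = sorted(still_missing)
    return completed
```

The key design point matches the article's thesis: the automated step can only add information or escalate to a person; there is no code path that outputs a denial.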
The four tenets of responsible AI
AI’s effectiveness hinges on its input data, necessitating an acknowledgment of its inherent limitations and biases for responsible usage. By addressing these concerns, responsible AI becomes an invaluable asset in delivering high-quality, value-based care and improving patient outcomes. Let’s explore four crucial tenets of responsible AI:
- Accountability: Establishing responsible AI involves close collaboration between clinical experts and software engineers. This practice ensures that expertise is incorporated into the technology, guiding AI model development, evaluation, and training.
- Transparency: AI-driven decisions must be rooted in clinical data, and transparency is crucial when sharing the decision-making process and information used. This transparency minimizes the possibility of AI models recommending PA denials without tangible or logical reasoning.
- Privacy & security: Clinical oversight is essential to safeguard sensitive patient information. AI models for prior authorization requests should never include patient identifiers. Instead, they should exclusively rely on critical treatment data, such as type, date of care, and diagnosis.
- Inclusiveness & fairness: Social determinants significantly impact patient care and access to it. Responsible AI ensures that at-risk patients influenced by such factors are not automatically denied. By aligning AI models with specific health plan policies, consistent standards are maintained, erroneous care denials are avoided, and medical experts’ judgment and fairness are prioritized across all patient populations.
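The "Privacy & security" tenet above, feeding the model only treatment data and never patient identifiers, amounts in practice to an allow-list filter at the model boundary. The following is a minimal sketch; the field names are illustrative assumptions, not a real schema.

```python
# Minimal sketch of an allow-list filter for model inputs: only
# treatment-relevant fields pass through, so patient identifiers
# (name, member ID, address, etc.) never reach the PA model.
# Field names here are illustrative assumptions, not a real schema.

ALLOWED_FIELDS = {"care_type", "date_of_care", "diagnosis_code"}

def to_model_input(record: dict) -> dict:
    """Return only allow-listed treatment fields; drop everything else."""
    return {k: record[k] for k in ALLOWED_FIELDS if k in record}
```

An allow-list is preferable to a block-list here: any field not explicitly permitted is dropped by default, so a new identifier field added upstream cannot silently leak into the model.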
Responsible AI's potential extends beyond diagnosis and treatment, promising to dramatically enhance patient experiences and health outcomes while maintaining patient privacy and data security. By championing responsible AI coupled with advanced clinical innovation and oversight, the industry is paving the way for a more patient-centered, precise, and compassionate health care system, fostering a healthier future for all.
Anne Nies, director of Machine Learning, Cohere Health