AI May Not Be So Smart When Payors Use It To Deny Medical Claims

From our friends at Specialty Pharmacy Continuum

Concerns are growing about the increasing use of artificial intelligence to review—and in many cases deny—healthcare claims, sparking a congressional inquiry and lawsuits. Thirty-two legislators sent a letter to the Centers for Medicare & Medicaid Services (CMS) on Nov. 3, 2023, urging increased oversight of the use of these tools in Medicare Advantage (MA) plans.

“In recent years, problems posed by prior authorization [PA] have been exacerbated by MA plans’ increasing use of AI or algorithmic software to assist in their coverage determinations in certain care settings, including inpatient hospital, skilled nursing facilities and home health,” wrote the representatives, led by Judy Chu (D-Calif.) and Jerrold Nadler (D-N.Y.). “Advocates and the media report that the use of such software has led to coverage decisions that are more restrictive than allowed under traditional Medicare rules, as well as more frequent and repeated denials of care.”

While stopping short of calling for an outright prohibition on AI, the letter continued, “it is unclear how CMS is monitoring and evaluating MA plans’ use of such tools in order to ensure that plans comply with Medicare’s rules and do not inappropriately create barriers to care.”

The lawmakers called on CMS to require plans to:

  • report PA data, including reason for denial, by type of service, beneficiary characteristics (such as health conditions) and timeliness of PA decisions;
  • compare guidance generated by these tools with actual MA coverage decisions; and
  • assess the data that plans are relying on to make these determinations and whether plans are inappropriately using race or other factors in these algorithms, among other measures.

In a lawsuit filed in the U.S. District Court for the District of Minnesota, the families of two Wisconsin men, now deceased, accused UnitedHealth Group of using its AI model, naviHealth (nH) Predict, to inappropriately deny claims for extended care such as nursing facility stays. The lawsuit noted that when patients appealed the claim denials, either through internal appeals or federal administrative law judge rulings, they won 90% of the time.

“This demonstrates the blatant inaccuracy of the nH Predict AI Model and the lack of human review involved in the coverage denial process,” the lawsuit contended. “Despite the high error rate, Defendants continue to systemically deny claims using their flawed AI model because they know that only a tiny minority of policyholders (roughly 0.2%) will appeal denied claims, and the vast majority will either pay out-of-pocket costs or forgo the remainder of their prescribed post-acute care.”
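Taken at face value, the complaint’s own figures illustrate why a low appeal rate matters: if only about 0.2% of denials are appealed and roughly 90% of appeals succeed, very few potentially erroneous denials are ever corrected. A minimal back-of-the-envelope sketch; the 10,000-denial volume, and the extrapolation of the 90% overturn rate to unappealed claims, are illustrative assumptions rather than figures from the lawsuit:

```python
# Illustrative arithmetic only. The denial volume and the extrapolation of the
# 90% overturn rate to unappealed claims are assumptions, not facts from the suit.
denials = 10_000          # hypothetical number of denials
appeal_rate = 0.002       # ~0.2% of policyholders appeal (per the complaint)
overturn_rate = 0.90      # ~90% of appeals succeed (per the complaint)

appeals = denials * appeal_rate           # ~20 appeals
overturned = appeals * overturn_rate      # ~18 denials corrected

# If the 90% error rate seen on appeal held across all denials (an assumption),
# the vast majority of erroneous denials would never be corrected.
estimated_erroneous = denials * overturn_rate        # ~9,000
corrected_share = overturned / estimated_erroneous   # ~0.2%

print(f"{overturned:.0f} of ~{estimated_erroneous:.0f} potentially erroneous "
      f"denials corrected ({corrected_share:.1%})")
```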

UnitedHealth denied the allegations. “The naviHealth Predict tool is not used to make coverage determinations. The tool is used as a guide to help us inform providers, families and other caregivers about what sort of … care the patient may need both in the facility and after returning home,” the company wrote in a statement. “Coverage decisions are based on CMS criteria and the terms of the member’s plan. This lawsuit has no merit, and we will defend ourselves vigorously.”

Attorneys for the families are seeking class action status for the suit, which would not be the first of its kind. In July 2023, such a lawsuit was filed against Cigna, alleging that its PXDX algorithm illegally denied thousands of claims without physician review.

Not All Bad?

AI does have the potential to improve the efficiency and accuracy of claims review, analyzing vast amounts of data—such as medical records, billing patterns and historical claims data—that would be virtually impossible for individual human reviewers to assess. The technology also can reduce the paperwork burden on clinicians and speed up the PA process (Future Health J 2019;6[2]:94-98).

Many of these strengths were highlighted during a session on AI at AMCP Nexus 2023, in Orlando, Fla. The speakers presented case reports demonstrating how the technology can detect claims errors; prevent fraud, waste and abuse; and streamline contract review. In those examples, however, AI was not being used to issue outright claims denials. Unfortunately, that is not a line all AI applications respect, noted David Lipschutz, JD, the associate director of the Center for Medicare Advocacy.

“We first encountered this issue a couple of years ago, when we noticed that more and more Medicare beneficiaries were contacting us about having care denied or prematurely terminated in certain care settings, primarily hospital, skilled nursing and home health,” Mr. Lipschutz said. “Based on the volume of reports we are receiving, an increasing number of insurers are delegating more and more of their claims review to software.”

Although these plans claim their AI programs are only used as guidance and do not make coverage determinations, “the evidence makes it abundantly clear that they are, in fact, being used for such purposes,” Mr. Lipschutz said.

The Center for Medicare Advocacy has found that when plans use AI tools, they issue more denials overall, as well as more frequent, repeated denials for the same episode of care. “If an appeal is filed and an outside entity overturns the denial, the beneficiary could get another denial within days,” he said. “One of the people who contacted us had 10 consecutive denials overturned on appeal for one episode of care in a nursing facility. She would get a denial from her plan and exercise her right to an expedited appeal; the outside entity would disagree with the plan and overrule the denial of care; and then the plan would issue another denial a day or two later. These tools do not appear to be self-correcting.”

The American Medical Association (AMA) House of Delegates adopted a policy statement in June 2023 calling for greater oversight of insurers’ use of AI to review PAs and patient claims, and advocating that insurers require human examination of patient records before denying care.

Keep It Focused

AI algorithms should help with operational aspects of claims review, but should not take the place of clinical judgment, noted Joseph Lassiter, PharmD, the managing director and chief pharmacy informatics officer at Visante. “These tools are designed to help determine signal versus noise, pulling all the documentation from claims [for] someone to review it,” he said. “But we don’t want an algorithm making decisions about medical care in place of clinicians.”
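As a rough illustration of the “signal versus noise” role Dr. Lassiter describes, such a tool would gather a claim’s documentation and flag it for a clinician’s review rather than issue a denial itself. A minimal sketch; the claim fields, triage rule and threshold are hypothetical and do not represent any vendor’s actual product:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    claim_id: str
    documentation: list[str] = field(default_factory=list)  # attached records
    flags: list[str] = field(default_factory=list)          # reviewer notes

def triage_for_review(claims: list[Claim], threshold: int = 2) -> list[Claim]:
    """Flag claims with sparse documentation for human review.

    The tool never denies a claim; it only surfaces the ones a clinician
    should look at first, with the supporting documents gathered together.
    """
    needs_review = []
    for claim in claims:
        if len(claim.documentation) < threshold:
            claim.flags.append("sparse documentation -- route to clinician")
            needs_review.append(claim)
    return needs_review

# Usage: the output is a reviewer worklist, not a set of coverage decisions.
worklist = triage_for_review([
    Claim("A-001", documentation=["progress note", "PT evaluation"]),
    Claim("A-002", documentation=["discharge summary"]),
])
print([c.claim_id for c in worklist])  # ['A-002']
```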

Dr. Lassiter said “almost every vendor” mentioned AI at the ASHP 2023 Midyear Clinical Meeting & Exhibition, in Anaheim, Calif. “This is not going away,” he said. “I would like to see organizations like ASHP join the AMA in taking on the issue from a clinician standpoint.”

Asked to comment, Gina Luchen, PharmD, ASHP’s director of digital health and data, noted that the group “is committed to supporting innovation in ways that ensure the safe and effective use of medications. The ASHP Statement on the Use of AI in Pharmacy prioritizes patient safety, emphasizing the critical role of pharmacists in evaluating, validating and overseeing the appropriate implementation of AI-based technologies across the medication-use process.”