Doctors warn AI-driven Medicare Advantage tools are blocking senior care via prior-authorization denials

Artificial intelligence (AI) is revolutionizing the healthcare industry, offering the promise of faster diagnoses, streamlined paperwork, and predictive analytics that can save lives. But for many doctors treating senior patients under Medicare Advantage plans, that same technology is now raising alarms.

A growing number of physicians and patient advocates warn that AI-driven tools used by private insurers to manage Medicare Advantage plans are being weaponized: not to improve care, but to deny it. At the heart of the issue lies prior authorization, a cost-control measure meant to prevent unnecessary treatments but increasingly blamed for delaying, or outright blocking, necessary care for vulnerable older adults.

Doctors argue that algorithms designed to flag “unnecessary” care are denying coverage based not on medical judgment, but on probability models that prioritize cost savings over patient well-being. The result? Seniors across the U.S. are being caught in a tangled web of automation, red tape, and corporate decision-making that leaves them sicker and more at risk.


What Is Prior Authorization and Why Does It Matter?

Prior authorization is a process in which healthcare providers must get approval from an insurance company before prescribing a medication, test, procedure, or treatment. It’s meant to ensure medical necessity and avoid waste. However, when AI tools are used to make those decisions without meaningful oversight from qualified clinicians, the process becomes less about care and more about cost containment.

In traditional Medicare, prior authorization is rare. But in Medicare Advantage (MA)—a privatized alternative run by insurance companies—it’s a common and growing practice. About 33 million seniors are enrolled in MA plans as of 2025, representing more than half of all Medicare beneficiaries. These plans often tout extra benefits and lower costs, but they also come with stricter utilization controls—including AI-powered prior-authorization tools.


The AI Problem: Algorithms in the Driver’s Seat

Over the past few years, major insurers managing Medicare Advantage plans have adopted AI software to automate prior authorization decisions. These tools analyze vast troves of data—clinical guidelines, treatment costs, patient histories—and attempt to predict which procedures are “likely” to be unnecessary.

But here’s the problem: AI is not a doctor. It cannot see patients, interpret complex symptoms, or assess subtle clinical changes. And yet, it’s increasingly making final decisions about what care is approved and what’s denied.

Several companies, some private and some publicly traded, have developed these tools. They operate as proprietary systems that are opaque to doctors and patients alike. Many use natural language processing (NLP) and machine learning to evaluate claim submissions and generate denials almost instantaneously, often without any human review.


Denial by Algorithm: Real-World Impacts

Consider the case of Margaret Taylor, an 83-year-old patient in Ohio with congestive heart failure. Her doctor ordered a repeat echocardiogram to evaluate new symptoms of breathlessness. Within minutes, her Medicare Advantage plan denied the request, citing a “lack of new clinical justification”—even though her symptoms had clearly worsened.

Dr. Jonathan Leary, her primary physician, said, “The decision was made by a computer. I couldn’t get a human on the phone for two days. By the time we got approval, she had already been admitted to the hospital with fluid overload.”

Margaret’s story is far from unique. A 2024 report from the Office of Inspector General (OIG) found that 13% of prior-authorization denials by Medicare Advantage plans were for services that met Medicare coverage rules and would likely have been approved under traditional Medicare. Many of those denials are now driven by AI-powered systems.


Doctors Speak Out: A Broken System

Physicians across the country are raising concerns that the current system is unmanageable and dangerous. Many say they are spending more time fighting denials than actually treating patients.

“We’re being overridden by machines,” said Dr. Priya Nair, a geriatrician in Los Angeles. “And these aren’t minor decisions. These are things like MRIs, physical therapy, home nursing care—vital services that seniors rely on.”

In response to growing backlash, professional medical groups like the American Medical Association (AMA) and the American Hospital Association (AHA) have called for stricter regulations on AI usage in prior authorization. They argue that:

  • AI should assist, not replace, human clinical review.
  • Denials should be explainable and transparent.
  • Patients and providers should have access to appeals in real time.

Insurers Respond: Efficiency Over Emotion?

Insurance companies argue that AI tools are essential to control costs in an overburdened system. They maintain that these systems improve efficiency, reduce fraud, and ensure care is used appropriately.

“AI does not make final decisions—it flags requests for further review,” said a spokesperson for one of the largest Medicare Advantage insurers. “Ultimately, a human reviews the case.”

But whistleblowers and internal reports suggest otherwise. A 2023 class-action lawsuit alleged that certain MA plans denied up to 30% of claims using automated tools without meaningful human oversight. In some instances, denials were generated in seconds and merely rubber-stamped by non-clinical staff afterward.


Political Momentum for Change

The outcry from doctors, patients, and advocacy groups is beginning to catch the attention of lawmakers. A bipartisan group in Congress has introduced the “Ensuring Patient Access to Critical Services Act,” which would:

  • Require real-time reporting of denial rates by AI tools.
  • Mandate clinical oversight of all AI-driven decisions.
  • Create a transparent appeals process that includes patient input.

Meanwhile, the Centers for Medicare & Medicaid Services (CMS) has proposed new guidelines aimed at curbing “algorithmic overreach.” These rules would make it clear that all decisions affecting care eligibility must involve a licensed clinician.


Rebuilding Trust in AI in Healthcare

Despite the concerns, many healthcare leaders believe AI still holds tremendous promise—if used responsibly. Properly designed, AI can reduce administrative burdens, identify fraud, and even improve diagnostic accuracy.

But trust must be rebuilt. That means:

  • Transparency: Patients and doctors must know when AI is involved in a decision.
  • Accountability: There must be recourse when an AI decision causes harm.
  • Human oversight: Algorithms should support, not replace, medical judgment.

Healthcare is not a one-size-fits-all industry. The human body is complex, and so are the social and emotional factors that shape treatment outcomes—especially in elderly patients. No machine, no matter how advanced, can fully understand the nuances of an 80-year-old with multiple chronic conditions and limited mobility.
