
AI in Healthcare

When AI overrides your doctor's orders

Your doctor prescribed the care. Your insurer said no — and an algorithm made that call. Across the country, major health insurance companies have quietly embedded artificial intelligence into the processes that determine who gets coverage and who gets cut off. For elderly and seriously ill patients, the consequences have been devastating: premature discharges, exhausted appeals, mounting out-of-pocket costs, and in some cases, death. Clarkson Law Firm represents insureds in lawsuits against UnitedHealth Group, Humana, and Cigna for their alleged illegal use of AI and advanced algorithms to systematically deny medically necessary care. If your claim was denied, you may have legal options.

Key Insights

   The algorithm at the center of these cases has an error rate of approximately 80%.

   Most patients don't appeal — and insurers are counting on it.

   Courts are forcing these companies to open their books.


Doctors make medical decisions. Algorithms don't.

These cases are about something simple. You paid for coverage. Your doctor determined what care you needed. And your insurer deployed a piece of software to override that determination — not because the software had better information, but because it lends a veneer of objectivity to denials. That is not what insurance is supposed to do. That is a breach of the contract you signed and, in many cases, a violation of federal law.

We are not arguing against technology in healthcare. AI and machine learning, deployed with proper safeguards and genuine physician oversight, can play a legitimate role. But when the algorithm is the decision-maker and the physician review is a formality — a rubber stamp applied to hundreds of thousands of denials at a rate of little more than a second per claim — patients lose.

The federal government is now considering expanding AI-based review into traditional Medicare. We know what that looks like. Whatever form this technology takes, the principle does not change: a human being must always make a meaningful, individualized decision about another human being's care. That is what the law requires. That is what our clients are owed.

"The AI wave is already reshaping healthcare, and it shows no signs of slowing down. We must not allow innovation to supersede the principles of fairness, clinical judgment, and due process that healthcare depends on."

Ryan Clarkson - Managing Partner, Clarkson Law Firm, quoted in The Contrarian

Our AI in Healthcare Legal Team

PRESS & MEDIA

Tell us how AI affected your treatment

FREQUENTLY ASKED QUESTIONS

Which insurers is Clarkson suing, and why?

Clarkson has filed class action lawsuits against all three insurers — UnitedHealth Group, Humana, and Cigna — alleging that they used AI tools to systematically deny or cut short medically necessary care for Medicare Advantage patients. The lawsuits allege that these companies deployed algorithms they knew to be highly inaccurate rather than relying on genuine physician review.

What is nH Predict?

nH Predict is an AI tool developed by naviHealth, a company acquired by UnitedHealth Group. The tool generates predictions about how much post-acute care — skilled nursing, inpatient rehabilitation — a patient is expected to need. UnitedHealth and Humana both used it to determine coverage cutoffs. Plaintiffs allege the tool routinely contradicted treating physicians and carried an error rate of approximately 80% when denial decisions were challenged at the Administrative Law Judge level.

Who is affected by these denials?

Primarily Medicare Advantage enrollees — seniors aged 65 and older who chose private insurance plans in lieu of traditional Medicare. These individuals paid premiums for coverage that, in many cases, their insurers refused to honor when they needed post-acute care following hospitalization. The lawsuits also cover their families and estates in cases where patients have since died.

How do I know if I qualify?

You may qualify to be part of a class action if you were a Medicare Advantage enrollee with UnitedHealth, Humana, or Cigna and had post-acute care coverage denied or cut short in ways that contradicted your treating physician's recommendations. Contact us to discuss your situation.

Do these lawsuits cover insurance beyond Medicare Advantage?

The active Clarkson litigation focuses on Medicare Advantage, the private insurance alternative to traditional Medicare. However, similar algorithmic tools may be used across commercial insurance products, and related concerns apply more broadly. The federal government is also piloting AI-based review within traditional Medicare.

Why do these lawsuits matter?

These lawsuits are establishing legal precedent for how courts treat AI-driven healthcare decisions. They are compelling companies to produce internal records that reveal how these tools were designed, deployed, and evaluated. The outcomes will shape what insurers are and are not permitted to do with AI in the claims process, affecting every patient in the country who relies on health insurance coverage.

OTHER AI LEGAL FOCUS AREAS

AI Harm

In some of the most devastating cases, AI chatbots have encouraged vulnerable users to take their own lives, exposing the deadly consequences of deploying AI without adequate safeguards.

AI and Intellectual Property Theft

AI companies have scraped the creative work of writers, artists, and developers without consent or compensation to train their products.

AI Washing

Companies are deceiving consumers and investors by exaggerating or fabricating the AI capabilities of their products and services.

AI in Healthcare

Insurance companies are deploying AI systems to wrongfully deny patient claims, overriding physician judgment and putting profits above care.

AI Taskers and Trainers

The human workers who label data and train AI models are being misclassified as independent contractors, denying them wages, benefits, and legal protections they are owed.

Join our mailing list to stay current with Clarkson’s AI-related cases.


By submitting this form, you agree to the Terms of Service and Privacy Policy.