AI in Prior Authorization and Claims Review: Federal and State Regulation

For millions of patients and healthcare providers, the “prior authorization” process is often a source of frustration—a bureaucratic hurdle that can delay critical care. Now, artificial intelligence (AI) is entering the fray. While AI promises to speed up approvals and reduce administrative burdens, it also introduces significant risks, including biased decision-making and the potential for bulk denials without human oversight.

As insurers increasingly integrate algorithmic tools into claims review, a complex legal battle is emerging between federal standards and state-level consumer protections. Understanding who regulates these tools—and what protections you actually have—is essential for navigating the modern healthcare landscape.

The Federal Landscape: A Fragmented Approach

Currently, there is no single federal standard governing the use of AI in prior authorization. Instead, oversight is split across different agencies depending on the type of insurance coverage.

Private Employer-Sponsored Plans and ERISA

Most workers with employer-sponsored insurance are enrolled in self-funded plans. These plans are governed by the U.S. Department of Labor (DOL) under the Employee Retirement Income Security Act (ERISA).

ERISA is a powerful piece of legislation because it generally preempts state insurance laws. This means that even if a state passes a law requiring human review of AI denials, that law may not apply to self-insured employer plans. While ERISA requires a “full and fair” review of all health claims, the government has not yet issued specific guidance on what “full and fair” means when an AI algorithm is making the decision.

However, the DOL has shown it will act against blatant abuse. In one recent case, a third-party administrator (TPA) was accused of violating ERISA by using an automated process to deny claims in bulk without individual medical necessity evaluations. That case resulted in a settlement to compensate the affected enrollees.

The Affordable Care Act (ACA) Floor

For those with Marketplace or off-Marketplace private insurance, the Affordable Care Act established a federal “floor” of protections. This includes standardized internal claims and appeals processes and the right to an “external review” by an independent entity if a claim is denied.

Public Coverage: Medicare and Medicaid

Federal guidance for public programs is more specific but still evolving:

  • Medicare: Regulations from 2023 and guidance from 2024 clarify that Medicare Advantage organizations cannot use software or algorithms to make medical necessity decisions without considering individual circumstances. Any denial based on medical necessity must be reviewed by a healthcare professional. The federal government is also piloting the Wasteful and Inappropriate Services Reduction (WISeR) Model in six states to test AI’s role in traditional Medicare prior authorizations.
  • Medicaid: While current regulations don’t explicitly ban AI, Medicaid managed care rules require that any decision to deny services be made by “an individual” with the appropriate expertise. States can further tighten these rules through contracts approved by the Centers for Medicare & Medicaid Services (CMS).

State-Level Protections: The Front Line of AI Defense

Because federal oversight is fragmented, many states are stepping in to create their own safeguards. These laws generally fall into two categories: broad consumer protection laws and industry-specific health insurance regulations.

Broad Consumer Protections

All 50 states have laws prohibiting “unfair or deceptive acts and practices,” enforced by state attorneys general. Some states, such as Colorado and Utah, have specifically amended these broad laws to include general protections against AI-driven harm.

Targeted Health AI Regulations

A growing number of states have updated their “utilization review” standards—the rules governing how insurers decide if a service is medically necessary. Key protections being implemented include:

  • Mandatory Human Review: In Illinois, for example, an adverse determination must be made by a “clinical peer,” and AI cannot be the sole decision-maker.
  • Individualized Analysis: Alabama requires that AI tools base determinations on the enrollee’s unique clinical history and circumstances.
  • Transparency and Disclosure: Utah requires entities using AI for utilization review to disclose this practice to the public, the state department of insurance, providers, and enrollees.
  • Accuracy Audits: California law requires that AI tools be periodically assessed and revised to ensure they remain reliable and accurate.
  • Algorithmic Inspection: In Texas, regulators have the authority to audit and inspect the automated decision systems used by utilization review agents.
  • Anti-Discrimination: Washington state requires that AI tools be applied fairly and equitably, prohibiting direct or indirect discrimination against enrollees.

Key Takeaways: AI in Health Insurance

  • ERISA Preemption: If you have a self-insured employer plan, federal ERISA law often overrides state AI protections.
  • Human-in-the-Loop: Many states and Medicare Advantage rules now require a human clinician to review AI-generated denials.
  • Right to Appeal: Regardless of whether AI was used, consumers generally have the right to appeal denials through internal and external reviews.
  • State Variation: Protections vary wildly by state; some mandate transparency (Utah), while others focus on clinical accuracy (California).

The Path Forward: Guidance and Standardization

As the technology evolves faster than the law, regulators are turning to “model bulletins” to create consistency. By early April 2026, at least 25 states had issued guidance based on a 2023 model bulletin from the National Association of Insurance Commissioners (NAIC).

This guidance signals a shift toward expecting insurers to implement strict internal controls and to provide regulators with full access to system validation and testing data. The goal is to ensure that while AI may support the process, it does not replace the clinical judgment required to provide safe and effective patient care.

Frequently Asked Questions

Can an AI legally deny my medical claim?

In many jurisdictions, AI cannot be the sole decision-maker for medical necessity denials. Many state laws and Medicare Advantage regulations require a licensed healthcare professional to review and finalize any adverse determination.

What should I do if I suspect an AI made a mistake in my prior authorization?

You should exercise your right to a “full and fair review.” Request the specific clinical criteria used for the denial and file an appeal. If the internal appeal is denied, check if you are eligible for an external review by an independent third party.

Do all states have the same AI protections?

No. AI regulation is currently a patchwork. Some states have very specific laws regarding clinical peers and audits, while others rely on general consumer protection statutes.
