Utah let AI prescribe medicine

The AI prescription pilot shows why the FDA, not a commerce department, should be setting the rules.


The case for AI prescription renewals is real. So is the case against trusting a state sandbox to catch the risks.

In January, a security research firm called Mindgard sat down with a chatbot. The chatbot had been built by Doctronic, a health technology startup that had just become the first company in American history to receive state approval to autonomously renew medical prescriptions using artificial intelligence.

Mindgard’s researchers fed the AI a fabricated regulatory bulletin and watched what happened. The system, convinced by a document that did not exist, told them it would triple the standard prescribed dose of OxyContin.

Doctronic and Utah’s Office of Artificial Intelligence Policy were quick to clarify that the vulnerable chatbot was Doctronic’s public-facing tool, not the hardened system running the actual prescription pilot. That distinction matters, and it is worth taking seriously.

But it does not resolve the deeper question the exchange raises. That question is not whether this particular system was compromised, but whether a 12-month state sandbox programme, run by a commerce department with a mandate to encourage AI innovation, is the right mechanism for finding out.

Start with what is genuinely true about the problem Utah is trying to solve. Prescription renewal is, for enormous numbers of Americans, a bureaucratic obstacle that serves no clinical purpose. About half of all people with chronic conditions do not take their medications as prescribed, according to the CDC. The broader challenge of making healthcare accessible and preventative, rather than reactive, is one the tech industry has been grappling with for years.

A significant portion of that non-adherence traces directly to the renewal process: the two-week wait for a primary care appointment, the missed call from the surgery, the lapsed prescription that means starting over. Managed Healthcare Executive reported that Doctronic’s co-founder Matt Pavelle puts the figure at around 30% of all non-adherence.

That is a large number attached to a concrete and fixable problem. Medication non-adherence costs the American healthcare system somewhere between $100 billion and $300 billion annually, depending on which set of studies you consult, and is associated with around 125,000 preventable deaths per year. Those are not made-up numbers from a startup’s pitch deck. They come from peer-reviewed literature and from the CDC.

The access argument for AI prescription renewals is therefore not trivial. It is strongest precisely where the care system is thinnest: rural areas, low-income patients, older Americans who struggle to attend in-person appointments.

Adam Oskowitz, the vascular surgeon who co-founded Doctronic, put it plainly in January: patients are waiting weeks for an appointment to renew a prescription for a medication they have been taking for years, for a condition that has not changed. That wait is not a feature of the system. It is a failure of it. If AI can fix that failure safely, it should.

The problem is the word safely. Doctronic’s benchmark for safety is that its AI matched human clinicians’ treatment plans 99.2% of the time across 500 urgent-care cases. The company shared those figures with Utah regulators, who found them persuasive enough to grant approval.

But 500 cases is a small sample for a system that will eventually process prescriptions at scale. And the 0.8% that did not match translates, at any real volume, into a significant number of patients receiving something other than what a clinician would have recommended: at 100,000 renewals a year, that is 800 patients.

More fundamentally, matching what a clinician recommends in a structured evaluation is not the same as being robust against the full range of real-world inputs, including the adversarial ones.

The Mindgard test was not a stress test of the live system; it was a demonstration that the company’s publicly accessible AI could be manipulated with a fabricated regulatory bulletin. That the live system is different is reassuring. It is not conclusive.

What makes the Utah arrangement specifically worth scrutinising is the regulatory mechanism it uses. The state’s Office of Artificial Intelligence Policy, created in 2024, has the authority to waive its own unprofessional conduct laws for companies that enter its regulatory sandbox. That is what it did for Doctronic.

The three-phase pilot begins with physician review of every renewal, which sounds rigorous. Phase three, the operational phase, involves physician review of between five and ten per cent of renewals. The rest proceed autonomously. STAT News raised the question of whether an AI system that evaluates clinical information and issues prescriptions should be regulated as a medical device by the FDA.

That question remains unanswered. Utah does not have the authority to answer it, and its agreement with Doctronic does not require the FDA to be satisfied before the system scales.

The American Medical Association and the Utah Academy of Family Physicians both raised formal objections. The AMA’s CEO, Dr John Whyte, said in a statement that removing physicians from clinical decisions puts patients at risk. The Utah Academy said the programme demonstrated an apparent willingness to move forward with AI without the necessary guardrails.

Those are physician groups, and physician groups are not always disinterested observers when it comes to AI that might reduce demand for their services. But the concern about guardrails is separable from guild interest. A state commerce department has different incentives from a regulator whose primary mandate is patient safety.

Utah’s OAIP is explicitly tasked with encouraging AI adoption. That is fine as a policy goal. It should not be the primary lens through which prescription safety is evaluated. The WHO warned in 2021 that existing policies and regulations were insufficient to protect patients from AI in healthcare. Four years later, that gap has not closed.

None of this means the Doctronic pilot is wrong. It might turn out to be genuinely valuable and genuinely safe. The phased approach, the monthly reporting requirements, the exclusion of controlled substances and injectables, the malpractice insurance holding the AI to the standard of a physician: these are serious design choices, not window dressing.

If the programme runs for 12 months and the data shows clean outcomes, that evidence will matter for every state considering whether to follow.

But evidence is the point. The question is not whether AI can help with prescription renewals. It probably can. The question is who is responsible for generating the evidence that would let us know. A state commerce office running a 12-month pilot with a startup founded in 2023 is not obviously that entity.

The FDA exists precisely because the history of American medicine is full of innovations that seemed obviously beneficial until, at scale, they were not.

The thalidomide that never made it to the US market did not fail because a startup’s pilot showed worrying results. It failed because the FDA’s Frances Kelsey demanded the kind of evidence that a sandbox programme is not designed to produce.

Patients waiting weeks for a prescription renewal deserve a better system. They also deserve to know that the AI renewing their prescription has been tested by someone whose job is safety, not innovation.
