EchoPrime, published in Nature in February 2026, outperforms both task-specific AI tools and previous foundation models across 23 cardiac benchmarks, and its code, weights, and a demo are publicly available.
An echocardiogram is one of the most common diagnostic tools in cardiology: an ultrasound of the heart that reveals how it moves, how its chambers fill and empty, and whether its structure is compromised. Interpreting one requires training, time, and a specific kind of spatial attention: the ability to look at moving images of a beating heart and translate them into a clinical narrative.
Researchers at Cedars-Sinai Medical Center, working with colleagues from Kaiser Permanente Northern California, Stanford Health Care, Beth Israel Deaconess Medical Center in Boston, and Chang Gung Memorial Hospital in Taiwan, have built an AI system that can do the same thing.
EchoPrime, a video-based vision-language model, analyses echocardiogram footage and generates a written report of cardiac form and function. Its findings were published in Nature (volume 650, pages 970-977) in February 2026, under the title “Comprehensive echocardiogram evaluation with view primed vision language AI.”
The scale of the training is what sets EchoPrime apart. The model was trained on more than 12 million echocardiography videos paired with cardiologists’ written interpretations, drawn from 275,442 studies across 108,913 patients at Cedars-Sinai.
No previous AI model for echocardiography has been trained on data of that volume.
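The article does not go into the paper's architectural details, but the broad pattern behind a video-based vision-language model of this kind can be sketched: a video encoder and a text encoder are trained jointly so that each echo clip's embedding lines up with the embedding of its cardiologist-written report. The toy PyTorch example below shows that contrastive pairing in miniature; every class name, dimension, and the random stand-in data are invented for illustration, and none of it is EchoPrime's published code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical, simplified sketch of a video-text contrastive model.
# Not EchoPrime's actual architecture; names and sizes are invented.

class VideoEncoder(nn.Module):
    """Embeds an echo clip (batch, frames, channels, H, W) into one vector."""
    def __init__(self, embed_dim=256):
        super().__init__()
        self.frame_net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )

    def forward(self, video):
        b, t, c, h, w = video.shape
        frames = self.frame_net(video.reshape(b * t, c, h, w))  # per-frame features
        return frames.reshape(b, t, -1).mean(dim=1)             # average over time

class TextEncoder(nn.Module):
    """Embeds a tokenised report (batch, tokens) into the same vector space."""
    def __init__(self, vocab_size=10_000, embed_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.proj = nn.Linear(embed_dim, embed_dim)

    def forward(self, tokens):
        return self.proj(self.embed(tokens).mean(dim=1))         # mean-pool tokens

def contrastive_loss(video_emb, text_emb, temperature=0.07):
    """CLIP-style loss: matching video/report pairs score higher than mismatched ones."""
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = v @ t.T / temperature
    targets = torch.arange(len(v), device=v.device)
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets)) / 2

# Toy training step on random data standing in for echo clips and report tokens.
videos = torch.randn(4, 8, 1, 64, 64)        # 4 clips, 8 frames each
reports = torch.randint(0, 10_000, (4, 32))  # 4 tokenised reports
video_enc, text_enc = VideoEncoder(), TextEncoder()
loss = contrastive_loss(video_enc(videos), text_enc(reports))
loss.backward()
print(f"contrastive loss: {loss.item():.3f}")
```

In broad terms, a model trained along these lines can match a new study's videos against report text in the shared embedding space, which is one common route from video-text alignment to report generation.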
What it can do
Tested across five international health systems, EchoPrime achieved state-of-the-art performance on 23 diverse benchmarks of cardiac structure and function, outperforming both task-specific AI approaches (models trained to do one thing, such as measuring ejection fraction) and previous foundation models that aimed for broader capability.
The model’s outputs are designed to assist clinicians, not replace them: it produces a written summary that cardiologists can review and act on, rather than rendering a diagnosis autonomously.
The research team has made the model’s code, weights, and a working demo publicly available, a decision that reflects a broader shift in AI research towards open publication, and that will allow other institutions to test EchoPrime against their own patient populations.
The context around it
EchoPrime arrives in a year when AI misdiagnosis has been named one of the top patient safety threats by ECRI, the healthcare safety organisation. That context does not undermine EchoPrime’s promise so much as it frames the standard it will need to meet.
The goal is not an AI that sometimes reads echocardiograms accurately; it is one that does so consistently enough to reduce the burden on cardiologists without introducing new categories of error.
Cardiology has been a productive area for AI-assisted diagnostics precisely because the data (ultrasound video, electrocardiograms, and other imaging) is relatively structured and abundant.
The Cedars-Sinai work is arguably the most thorough attempt yet to turn that abundance into a generalised tool. Whether EchoPrime moves from published model to clinical deployment at scale depends on factors (regulatory approval, institutional adoption, liability) that the Nature paper does not address.
But as a demonstration of what is now technically possible in cardiac AI, it sets a new mark.