A new Bloomberg-Feroot investigation finds that nine of the 10 largest US health companies are still loading advertising trackers on the very pages where patients log in and register. The story keeps repeating because nothing has stopped it.
There is, by now, a familiar shape to investigations of online tracking. A reporter or researcher loads a website, watches what loads in the background, and discovers, often shockingly, where the data goes.
Bloomberg’s latest such investigation, published this month, found that almost nothing has changed in the corner of the internet where the practice would be hardest to defend: the websites of America’s largest healthcare companies.
Working with the privacy-compliance firm Feroot Security, Bloomberg examined the websites of the ten largest publicly traded US health insurance, hospital, and laboratory companies. Nine of the ten had advertising and analytics trackers installed on user-registration or login pages.
On about 15 per cent of the broader sample of health websites the team examined, trackers could read exact keystrokes on login pages, meaning the third parties involved could in principle collect Social Security numbers, usernames, passwords, email addresses, appointment times, billing details, and medical diagnoses.
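Feroot’s tooling is proprietary, but the basic static scan such investigations perform can be approximated: fetch a page’s HTML and flag external scripts whose hosts belong to known tracker domains. The sketch below is a minimal illustration in Python; the domain list and the sample page are hypothetical, and a real scan would also have to execute JavaScript, since many trackers are injected at runtime by tag managers rather than appearing in the raw HTML.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

# Illustrative list only; real scanners use much larger, maintained sets.
TRACKER_DOMAINS = {
    "connect.facebook.net",      # Meta Pixel loader
    "www.googletagmanager.com",  # Google tag infrastructure
    "snap.licdn.com",            # LinkedIn Insight Tag
    "analytics.tiktok.com",      # TikTok Pixel
}

class ScriptScanner(HTMLParser):
    """Collect the hostnames of all external <script src=...> tags."""
    def __init__(self):
        super().__init__()
        self.hosts = set()

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src:
                host = urlparse(src).hostname
                if host:
                    self.hosts.add(host)

def find_trackers(html: str) -> set:
    """Return the tracker hosts referenced by a page's static HTML."""
    scanner = ScriptScanner()
    scanner.feed(html)
    return scanner.hosts & TRACKER_DOMAINS

# Hypothetical hospital login page embedding a Meta Pixel loader.
page = """
<html><head>
  <script src="https://connect.facebook.net/en_US/fbevents.js"></script>
  <script src="https://example-hospital.com/static/app.js"></script>
</head><body><form id="login">...</form></body></html>
"""
print(find_trackers(page))  # flags the Meta loader, not the first-party script
```

The interesting part of the real investigations is not the detection itself but running it across hundreds of patient-facing pages and observing what the flagged scripts transmit once loaded.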
It is, depending on how one frames it, either a story about persistence or a story about regulatory failure. Probably both.
How the trackers got there, and why they are still there
The shape of the problem has been visible for years. An academic study published in Health Affairs found that 98.6 per cent of US hospital websites included third-party tracking.
We wrote in 2022 that 33 of the top 100 US hospital websites had Meta’s Pixel sending data to Facebook every time a patient clicked a button to schedule an appointment. STAT’s investigative team showed in 2023 that almost every hospital website in the country was leaking visitor data to ad-tech vendors despite explicit privacy promises.
Federal regulators followed. The Office for Civil Rights and the Federal Trade Commission jointly warned roughly 130 hospitals and telehealth providers in 2023 that the use of tracking technologies on patient-facing pages risked violations of HIPAA and consumer-protection law.
The healthcare industry pushed back. In June 2024, a federal judge in Texas sided with hospital associations, ruling that HHS had exceeded its authority in trying to extend HIPAA to tracking on unauthenticated webpages. The agency’s enforcement appetite has visibly chilled since.
The result is a category of online activity that everyone involved knows is sensitive, that has been the subject of academic study, regulatory warning, and federal litigation, and that, on Bloomberg’s evidence, is no less common in 2026 than it was in 2022.
Where the data actually flows
The third parties most commonly identified by Feroot’s tooling are familiar: Meta’s tracking pixel, Google Analytics, LinkedIn Insights, TikTok Pixel, and a long tail of advertising and data-broker vendors.
The data they receive can include the URL of the page, search terms entered into a hospital’s symptom-finder, scheduling actions, and, in keystroke-capable cases, fields entered before submission. Once that data leaves the hospital’s domain, the hospital, by industry consensus, has limited control over what happens to it.
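The leak does not require a tracker to be configured maliciously. Tracking pixels typically report the current page URL as a query parameter on a beacon request, so even a “page view only” configuration forwards whatever is embedded in that URL, including a symptom search. A small sketch, using a hypothetical tracker endpoint and page, shows how the search term survives the round trip intact:

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Hypothetical pixel beacon: the page URL, including the patient's
# symptom search, is passed along as an ordinary query parameter.
page_url = "https://example-hospital.com/symptom-finder?q=chest+pain"
beacon = "https://tracker.example/collect?" + urlencode({
    "dl": page_url,    # "document location", as pixels commonly report it
    "ev": "PageView",  # event name
})

# What the third party receives, recovered from the beacon itself:
received = parse_qs(urlparse(beacon).query)
print(received["dl"][0])  # the full page URL, search term included
```

Nothing in the configuration mentions health data; the sensitivity rides along inside a field the tracker was always going to send.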
The marketing case for the trackers is simple. They support advertising attribution, conversion measurement, and audience-building, the same functions for which they exist on retail or media websites.
The defence, when offered, is that the trackers are configured not to capture protected health information, and that hospitals have business associate agreements (or do not need them) with the relevant vendors.
Bloomberg’s investigation, like the academic and journalistic ones before it, suggests that this defence is harder to sustain in practice than in theory.
The trackers, once embedded, do what trackers do. Configuring them to behave with the discretion HIPAA expects is a discipline most healthcare websites have not maintained at scale.
There is a soft, almost philosophical version of this problem. Browsing a hospital website is, increasingly, the first step in a healthcare journey. The pages a patient looks at, the symptoms they search, and the providers they consider are, in aggregate, a portrait of their physical and mental health. That portrait does not become less sensitive because it was assembled inadvertently.
There is also a sharper version. The same advertising infrastructure that powers everyday e-commerce is, in this category, ingesting data about pregnancies, mental-health treatment, addiction, and serious diagnoses, often without the patient’s knowledge and certainly without their meaningful consent.
The advertising and data-broker ecosystem that follows, the chain of resale and inference that animates programmatic ads, is opaque enough that even the original tracker vendor cannot fully describe where the data ends up.
The fact that Amazon’s recently expanded Health AI service is designed to operate inside a HIPAA-compliant environment is a useful contrast: when companies want to handle health data carefully, they can. The default for most hospital websites, on Bloomberg’s reporting, is that they do not.
The path that closes this
There are, in principle, three ways the trackers stop. The first is regulatory: an enforcement action by HHS or the FTC that survives appeal and produces an actual settlement of consequence.
The second is judicial: a class action that produces damages large enough to outweigh the marketing utility of the trackers. The third is reputational: a healthcare company concludes that the brand cost of being named in investigations like Bloomberg’s exceeds the conversion lift the trackers deliver.
None of those paths has, until now, closed reliably. The Texas ruling has dampened the regulatory route. The class-action ecosystem is moving but slowly. Reputational damage, in healthcare, has a limited half-life. That is the structural reason the same investigation, with broadly the same findings, has been published every two or three years for the better part of a decade.
The patients on the receiving end of the trackers, meanwhile, are mostly unaware. The most realistic short-term protection, an ad-blocker plus a privacy-first browser, is the kind of self-help that should not be the default solution for a category of data that statute treats as protected. That it is, in 2026, the default solution is the story.