
If you searched GoodRx for a depression medication in 2022, Facebook learned about it before your pharmacist did. If you filled out BetterHelp’s intake questionnaire — answering questions about depression, suicidal thoughts, and prior therapy — your responses, with your email address and IP, were forwarded to Meta for ad targeting. If you scheduled an appointment with Cerebral, your name, mental health self-assessment, treatment information, and co-pay amount were shared with TikTok, Google, and Facebook through tracking pixels embedded in your patient portal.

None of these was a hack. The systems worked exactly as designed. The design was the problem.

The three biggest health tech privacy enforcement actions of 2023 — against GoodRx, BetterHelp, and Cerebral — share the same shape. A company makes a privacy promise on one screen. Their engineering and marketing stacks contradict it on the next. Years pass. Millions of users have their most sensitive health information silently transmitted to ad networks. Eventually a regulator catches up. The company pays a fine. The cycle starts again somewhere else.

These weren’t security incidents in the conventional sense. Nobody broke in. The companies built the disclosure pipes themselves, often with one team writing privacy copy while another pasted analytics tags into the same pages. That gap, between the front-of-house privacy story and the back-of-house data flow, is where health tech’s biggest privacy failures live. And the people best positioned to catch it, UX professionals, are usually only looking at the front.

Three cases, three lessons learned.

GoodRx: privacy copy that didn’t match what the page did

In February 2023, the FTC brought its first-ever enforcement action under the Health Breach Notification Rule and fined GoodRx $1.5 million. The complaint laid out a six-year pattern, starting in 2017, of the company telling users it would never share personal health information with advertisers — and then doing exactly that.

When a GoodRx user searched for a medication, the page that loaded sent the medication name and the user’s identifying information to Facebook, Google, and Criteo through standard tracking pixels. GoodRx then uploaded lists of users associated with specific health conditions to Facebook’s Custom Audiences system so it could target them — and lookalike audiences resembling them — with ads. The integration sent personal health information by default, with no user awareness and no functioning consent.
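The mechanics are worth seeing, because they are so ordinary. A standard Meta Pixel install fires on every page load and, by default, reports the full page URL, query string included, tied to the user’s browser identifier. The snippet below is an illustrative sketch, not GoodRx’s actual code; the domain, route, pixel ID, and medication are hypothetical placeholders. The point is that no bespoke engineering is required: the default install is the disclosure.

```ts
// Illustrative sketch, not GoodRx's code: the pixel ID, route, and payload
// below are hypothetical. `fbq` is the global the Meta Pixel loader defines.
declare function fbq(
  command: string,
  idOrEvent: string,
  params?: Record<string, string>
): void;

fbq("init", "000000000000000"); // placeholder pixel ID

// Suppose this runs on https://example-pharmacy.com/drug/sertraline?zip=14201.
// The default PageView event reports that full URL to Meta, so the medication
// name rides along in the path, linked to the user's _fbp cookie identifier.
fbq("track", "PageView");

// Standard events make the signal explicit. Meta's documented "Search" event
// takes a search_string parameter; forwarding the on-page query turns a drug
// search into a targetable health-condition signal.
fbq("track", "Search", { search_string: "sertraline" });
```

Nothing in that block looks malicious, which is exactly why it survives review.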

The UX-level decision wasn’t dramatic. Somebody added the pixel. Somebody approved the marketing integration. Somebody wrote the privacy copy, probably without checking what the page they’d written it on actually loaded. There’s no single villain in that chain — that’s the point. The failure was structural: marketing’s analytics decisions never got reviewed against product’s privacy promises until a federal agency forced the question.

The lesson is uncomfortable. Privacy isn’t what you put in your copy. It’s what your page does when a user opens it. If you’ve never asked which third-party scripts run on the screens you ship — and what those scripts receive when they do — you’re shipping privacy copy you can’t back up.

BetterHelp: the disclosure on the intake form

A month after GoodRx, in March 2023, the FTC announced a $7.8 million settlement with BetterHelp — the first time the agency required refunds for consumers whose health data had been misused. The data in question wasn’t trivial. BetterHelp asks new users to complete an intake questionnaire covering their mental health history, current symptoms, and prior care. The company had assured users, in its privacy practices and in copy displayed near the intake itself, that this information would be used only for limited purposes related to providing therapy.

In the same page load that displayed those assurances, the questionnaire was sending its data to Facebook, Snapchat, Pinterest, and Criteo, where it was used to build targeted advertising audiences. The FTC found that BetterHelp had also uploaded email addresses of users — including those who had completed intake responses — to Facebook to create lookalike audiences for ad targeting. Refunds eventually went to users who signed up between August 2017 and December 2020.

The intake screen is the worst possible place for this failure to live. Users are at their most vulnerable when answering “have you had thoughts of harming yourself?” — and they’re at their most willing to disclose because they believe the disclosure is in service of getting help. A privacy promise on that screen needs to be more bulletproof than one anywhere else in the product, because the gap between what the user thinks is happening and what is actually happening is widest at that moment.

The design lesson here isn’t just “audit your pixels.” It’s “audit the screens where users disclose the most, first.” If those screens are running the same analytics stack as your marketing pages, you have a problem regardless of what your privacy policy says.

Cerebral: discovering the problem doesn’t undo the harm

Cerebral’s case is structurally different from the other two. The company self-disclosed the breach to the U.S. Department of Health and Human Services in March 2023, admitting that tracking pixels added to its platforms back in October 2019 had been transmitting protected health information to Facebook, Google, and TikTok for more than three years. The notice went out to 3,179,835 patients.

The list of what was disclosed reads like a worst-case textbook example: names, phone numbers, email addresses, dates of birth, IP addresses, demographics, mental health self-assessments, appointment dates, treatment information, insurance plan names, member numbers, and co-pay amounts. For patients who had purchased subscription plans — the patients most engaged with their care — the disclosure was the most complete.

Cerebral isn’t a useful case study because the company is uniquely bad. It’s useful because the company caught the problem itself, and the catching didn’t help. The pixels had been live for more than three years. Three million people had their mental health treatment information shared with ad networks. The HHS notification, the public disclosure, the regulatory pressure that followed — none of it could pull the data back. Disclosure is irreversible. The audit that mattered was the one nobody did before the pixel was added.

In the aftermath, HHS updated its guidance on tracking technologies, and in July 2023 the FTC and HHS jointly warned roughly 130 hospital systems and telehealth providers that they were running the same risk. Many were. Many still are.

The job that designers aren’t doing

There’s a temptation to read these as compliance failures, and they are. There’s also a temptation to read them as marketing or engineering failures, and they are that too. But they’re design failures first.

Every one of these scandals lived in the gap between what users were told on the screen and what the screen was actually doing. Closing that gap isn’t compliance’s job, or marketing’s, or engineering’s. It belongs to the discipline whose remit is the user’s experience of the product, which means it belongs to UX. The designer is the person whose job is to make the screen honest — to make what the user thinks is happening match what is happening. If you’ve never asked which third-party scripts load on your screens, what data they send, and to whom, you’ve been doing half the job.

The minimum useful practice for a designer working on any patient-facing surface in health tech is this: for every screen you ship, be able to answer three questions. What gets loaded on this page that isn’t your code? What does each of those things receive when the page renders? And does that match what the page tells the user is happening? The screens that fail those questions are the screens that produce the next FTC press release.
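The first two questions can be answered empirically rather than by asking around. Below is a minimal sketch, assuming a Node project with Playwright installed; the URL and first-party hostname are placeholders. It loads one screen, flags every request that leaves your origin, and prints what each third party received.

```ts
// third-party-audit.ts — a minimal sketch, assuming Playwright is installed.
// The URL and first-party hostname below are placeholders, not real endpoints.
import { chromium } from "playwright";

const PAGE_UNDER_AUDIT = "https://app.example-health.com/intake"; // hypothetical
const FIRST_PARTY = "example-health.com"; // hypothetical

async function audit(): Promise<void> {
  const browser = await chromium.launch();
  const page = await browser.newPage();

  page.on("request", (request) => {
    const host = new URL(request.url()).hostname;
    if (host.endsWith(FIRST_PARTY)) return; // skip our own traffic

    // Question 1: what loads on this page that isn't our code?
    // Question 2: what does it receive? URLs and POST bodies are where
    // query strings, form answers, and identifiers leak.
    console.log(`[3rd party] ${host}`);
    console.log(`  url:  ${request.url()}`);
    const body = request.postData();
    if (body) console.log(`  body: ${body}`);
  });

  await page.goto(PAGE_UNDER_AUDIT, { waitUntil: "networkidle" });
  await browser.close();
}

audit();
```

The third question, whether what you see matches what the screen promises, is the judgment call no script can make; that part stays a design task.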

Practically, that means a few specific habits. Get added to whatever channel marketing uses to request new tags, before they’re added rather than after. Ask engineering to walk you through the actual network requests on three or four of your most sensitive screens — intake, scheduling, results — and write down what you see. When a vendor wants to integrate, treat “does this load on a patient-facing page?” as a design review question, not a security review question. None of these requires a privacy law degree. They require treating the third-party stack as part of your interface, because to the user it is.
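One way to make the habit stick, sketched below under the same assumptions as the audit script: feed the third-party hosts it observes into an explicit allowlist check, so that a new tag on a patient-facing route fails the build instead of shipping silently. The approved hosts here are illustrative.

```ts
// A sketch of an allowlist gate, building on the audit above. Hosts listed
// here are illustrative; the real list is whatever your review has approved.
const APPROVED_THIRD_PARTIES = new Set<string>([
  "cdn.example-health.com", // hypothetical first-party CDN
]);

function assertNoUnapprovedHosts(observedHosts: string[]): void {
  const unapproved = observedHosts.filter((h) => !APPROVED_THIRD_PARTIES.has(h));
  if (unapproved.length > 0) {
    // Failing loudly turns "somebody added a pixel" into a blocked merge
    // instead of a three-year silent disclosure.
    throw new Error(
      `Unapproved third parties on a patient-facing page: ${unapproved.join(", ")}`
    );
  }
}
```

The value isn’t the dozen lines of code; it’s that adding a vendor now means changing a reviewed file, which is the design review question made enforceable.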

GoodRx paid $1.5 million in 2023. BetterHelp paid $7.8 million. Cerebral notified 3.18 million people that their mental health treatment information had been shared with ad networks for more than three years. Every one of those numbers traces back to a decision someone made, or didn’t make, in a planning meeting where the right question wasn’t asked. The next case is being built right now in a sprint at a company we’ll all read about later. The designers in that sprint are the ones with the best chance of stopping it — if they decide that’s part of their job.