Building Trust at Scale: How We Audit Our Own AI Health App

📖 8 min read
Trust / Security
April 2026

Introduction

Trust is the only feature in a health app that actually matters. Everything else — fancy AI, beautiful UI, clever onboarding — collapses the moment a user feels their data isn't safe, their money isn't accounted for, or the app's behavior surprises them in a bad way. So when we ran a hard internal audit of FastAI Health Coach this month, we treated every finding as a blocker and shipped fixes for all of them in a week.

This post is the close-out report, written in plain language for the people who'll actually use the app. We'll walk through what we audited, what we found, and what we did about it. No marketing, no posturing — just the work.

The Audit Lens

We grouped findings into three severity tiers — Critical (ship now), High (ship this sprint), Medium (ship before public launch). The team committed to closing every Critical and every High before the next release went out. Here's how each one resolved.

1 — Passwords That Are Already Compromised Are Blocked at Sign-Up

One of the most common ways online accounts get hijacked has nothing to do with the app being attacked. It's password reuse: the user's password was leaked in some other service's breach years ago, and attackers run automated credential-stuffing scripts against every new app they can find.

The fix is simple in concept: when a user picks a password, check it against the publicly maintained HaveIBeenPwned database of known-breached passwords. If it's already on the list, refuse it and ask for a new one. We turned this on at the Clerk auth layer for FastAI. No friction for users with strong unique passwords; a hard wall for users about to make a critical mistake.

What testers see: if a tester tries a password like Password123! or any common breached password, sign-up fails with a clear message: "Password has been found in an online data breach. For account safety, please use a different password." It's mildly annoying for two seconds and protects the account for years.

2 — Every AI Surface Is Hardened Against Prompt Injection

Prompt injection is the AI-era equivalent of SQL injection. If your app sends user input directly to a language model as part of its instructions, an attacker can paste text that tells the model to ignore its real instructions and do something else — leak data, generate harmful content, exfiltrate the system prompt, you name it.

FastAI has five different AI endpoints that take user content as input — meal descriptions, Coach messages, goal text, free-form journal entries, and onboarding answers. Each one was a potential injection surface. We closed all five with a single shared helper, _prompt_utils.ts, that does three things on every AI call:

This isn't a bulletproof guarantee — no defense against a creative attacker is — but it raises the bar from "trivial" to "genuinely hard," and that's the right place to set the bar for a personal-health app at this stage.

3 — Money Is Reconciled Nightly, Not Trusted

The tricky thing about subscription apps is that there are three sources of truth about whether a user is a paying customer: the app store (Google Play / Apple), the receipt validator (RevenueCat), and your own database (Convex). All three should agree all the time. In practice, they sometimes drift — a refund happens, a subscription expires, a webhook gets dropped — and a paying user ends up locked out, or a non-paying user ends up with premium access.

We had no automated check for this. So we shipped one. A nightly Convex cron now reconciles every active user's entitlement against RevenueCat's source-of-truth using a server-side REVENUECAT_SECRET_API_KEY. Drifts are caught within 24 hours, not 24 days. Users don't have to email support to fix a billing glitch — the system catches it before they notice.

4 — Per-User Rate Limits Stop Runaway AI Cost

An AI app's worst-case cost is unbounded unless you put a cap on it. A single user — accidentally or maliciously — could in theory spam the Coach endpoint thousands of times per minute and run up our Anthropic bill into the hundreds of dollars in an hour.

The audit found we had no per-user rate limit. We added one — Convex-backed, per user, per minute, with a sane default that no real human user will ever hit and a clear error message for anyone who somehow does. Cost ceilings are now bounded by user count, not by user behavior.

5 — Sentry Sees Real Bugs, Not User Mistakes

This one is less about security and more about engineering hygiene — but it absolutely affects trust because it determines how fast we can find and fix actual bugs.

Sentry, our error-tracking tool, was being polluted by user mistakes that aren't bugs at all: someone typing the wrong password, someone trying to sign in with an email that doesn't have an account, someone failing the breached-password check. These are expected error paths handled gracefully by the UI. They have no business in a crash dashboard.

We added a noise filter at the Sentry SDK layer that drops these specific Clerk-handled user errors before they leave the device. Result: when we open Sentry now, what we see is real bugs — not background noise. We find issues faster, fix them faster, and the app gets better faster.

6 — Account Deletion That Actually Deletes

"Delete my account" is the most basic privacy commitment an app can make, and it's the one most apps quietly fail. Often "delete" means "we'll mark you as inactive and keep your data forever, please email us if you want it really gone." We didn't want that. We wanted a tap, a confirm, and a true cascade.

The audit forced us to map every table in our Convex schema that referenced a user — there were nine — and ensure the deletion flow cascaded across all of them. We also fixed an ordering race we'd shipped earlier (where signing out happened after the redirect, leading to a UI flash and a TypeError). The fixed flow now signs out first, then deletes, then redirects. We had a tester run the whole flow end-to-end and confirmed: the data is actually gone.

What This Doesn't Cover

It's important to be honest about the audit's scope. This pass focused on application-layer trust: auth, AI input, billing reconciliation, error visibility, deletion. It did not cover:

These are honest gaps. As FastAI grows, the bar will rise.

Why Publish This?

Most apps don't write up their security work. Either they don't do it (in which case there's nothing to publish), or they do but treat the work as proprietary, or they're worried that publishing the list of fixes is the same as publishing the list of past vulnerabilities.

We chose to publish for one simple reason: a health app asks users to trust it with sensitive personal data. The least we can do is show our work. If you're considering FastAI, you deserve to know what we audit, what we found, and how we resolved it. If you're a developer building something similar, take this as a template — every one of these fixes is worth shipping in your app too.

Try a Health App That Takes Trust Seriously

FastAI Health Coach is LIVE on the iOS App Store (v2.12.2 in review). Android in Closed Testing on Google Play. Built with Coach memory, multi-view meal photos, and the trust hardening described above.

🍎 Download on iOS App Store →