
Case Study

A 4-Digit PIN Was Guarding 3.2 Million Health Records

On March 9, 2026, a hacker posted 14.59 GB of Cal AI user data on BreachForums. The attack vector: an unauthenticated Firebase backend and a PIN with 10,000 possible combinations. Here's what failed and what every builder handling health data needs to do before launch.

Flowpatrol Team · Apr 2, 2026 · 10 min read

The attacker called themselves "vibecodelegend"

On March 9, 2026, a threat actor posted 14.59 GB of user data on BreachForums. The app they hit was Cal AI — a popular AI-powered calorie tracking app with over 3 million users. The data included dates of birth, meal logs, health goals, body weight records, and 4-digit PINs. One of those records belonged to a child born in 2014.

The threat actor's handle: vibecodelegend.

That's not a coincidence. It's a message — or at least a confession. The breach happened because Cal AI shipped with the exact pattern that defines low-effort app development: a Firebase backend left in test mode, and an auth system that any script could brute-force in minutes. The attacker didn't need a zero-day. They needed a GET request and a loop.

This is where vibe coding and health data collide. And it's a combination that ends badly every time security is treated as a post-launch problem.


What was actually in those 14.59 GB

Before getting into the mechanics of how it happened, it's worth sitting with what was exposed. This wasn't a list of email addresses. It was health data — the kind people share with apps precisely because they expect it to be private.

  • Dates of birth: full birthdays, usable for identity verification and targeting
  • Meal logs: daily food intake, portion sizes, calorie counts
  • Health goals: weight-loss targets, dietary restrictions, fitness plans
  • Body weight records: historical weight data over time
  • 4-digit PINs: the primary authentication credential for many users
  • Children's data: at least one record for a user born in 2014

The child's record is significant. Health data for minors sits in a different legal category in many jurisdictions. It carries stricter handling requirements, longer retention obligations, and heavier penalties when breached. Shipping a calorie-tracking app without age verification and without tighter controls on any data that might belong to a minor isn't just a security gap — it's a compliance gap.

And then there's the PIN problem. A 4-digit PIN has exactly 10,000 possible combinations. If an attacker can make requests without getting rate-limited — which they could, because Cal AI had no rate limiting — they can try all 10,000 PINs for any account in a few minutes. The PIN wasn't protecting anything. It was theater.
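The arithmetic is worth making concrete. A quick sketch, assuming a deliberately modest, single-threaded guess rate (the 50 requests/second figure is an assumption, not a measurement from the breach):

```python
# Brute-force cost of a 4-digit PIN when nothing throttles the attacker.
combinations = 10 ** 4            # every PIN from 0000 to 9999
guesses_per_second = 50           # assumption: modest single-threaded request rate
worst_case_seconds = combinations / guesses_per_second

print(f"worst case: {worst_case_seconds / 60:.1f} minutes")
```

At that pace the entire keyspace falls in under four minutes, and a real attacker can parallelize.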


The Firebase backend that anyone could read

The subscription table in Cal AI's Firebase backend was publicly readable without any credentials. No authentication token. No API key check. Just a direct read request to a predictable endpoint, and the data came back.

This is the Firebase test mode default. When you create a new Firebase project and pick "Test Mode" to get started quickly, you get wide-open security rules. In older projects they look like this (newer test-mode templates add a 30-day expiry condition, but the database is just as open until it lapses):

rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /{document=**} {
      allow read, write: if true;
    }
  }
}

allow read, write: if true. No conditions. No auth check. Every document in every collection, readable and writable by anyone on the internet.

[Diagram: Firebase security rules in test mode (allow all) vs. proper authenticated rules]

Firebase shows you a warning in the console when these rules are active. But if you scaffolded the project with an AI tool, deployed via CLI, and moved straight into building features — you may never have seen that warning. There's no blocked deploy. No email alert. No hard stop. The rules ship as-is, and the database sits open.

This is the same misconfiguration we covered in our Firebase Misconfiguration article, where researchers found 916 sites with identical rules exposing 125 million records total. The Cal AI breach is that pattern with a name, a date, and children's health data attached to it.

AI code generators make this worse. When you ask an LLM or a vibe coding platform to scaffold a Firebase-backed app, the generated code initializes the Firebase client with your project config and starts reading and writing data. It works immediately — because test mode is open. The AI isn't configuring security rules. It's generating app logic. The security layer is your job, and if you don't know to go looking for it, it never gets done.


The 4-digit PIN: authentication in name only

The second failure compounds the first. Cal AI used a 4-digit PIN as the primary authentication factor for users. On its own, a short PIN isn't automatically broken — it depends entirely on what protects it.

The things that make a PIN safe:

  • Rate limiting — after N failed attempts, lock the account or throttle requests
  • Lockout policy — temporary or permanent lockout after repeated failures
  • Alerting — notify the user when multiple wrong attempts happen
  • Second factor — combine the PIN with something else the attacker can't guess
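The first two items can be sketched in a few lines. This is a minimal, in-memory illustration, not production code — a real deployment would back the counter with Redis or lean on the auth provider's built-in lockout — and the limits chosen here are assumptions to tune:

```python
import time

MAX_ATTEMPTS = 5        # assumption: failed attempts allowed per window
LOCKOUT_SECONDS = 300   # assumption: 5-minute lockout once the limit is hit

_failures = {}          # user_id -> timestamps of recent failed attempts

def pin_allowed(user_id, now=None):
    """Return False while the account is locked out for too many failures."""
    now = time.time() if now is None else now
    recent = [t for t in _failures.get(user_id, []) if now - t < LOCKOUT_SECONDS]
    _failures[user_id] = recent
    return len(recent) < MAX_ATTEMPTS

def record_failure(user_id, now=None):
    """Call this every time a PIN check fails."""
    _failures.setdefault(user_id, []).append(time.time() if now is None else now)
```

With a check like this in front of verification, the 10,000-guess loop below stalls after five attempts instead of running to completion.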

Cal AI had none of these. With the Firebase backend publicly readable and no rate limiting on PIN verification, an attacker could write a simple script to enumerate PINs for any user account. Ten thousand requests. A few minutes of compute. Full access.

This isn't a sophisticated attack. It's a for-loop.

# check_pin() stands in for the app's PIN-verification request
for pin in range(10000):                      # every 4-digit PIN, 0000 through 9999
    response = check_pin(user_id, f"{pin:04d}")
    if response.success:
        print(f"Found PIN: {pin:04d}")
        break

AI-generated auth code almost never includes rate limiting by default. It builds the happy path — register, log in, verify — and leaves the adversarial cases unhandled. Rate limiting requires a different mental model: thinking about what happens when someone is actively trying to break your app, not just use it. That's not the default frame when you're prompting an AI to build a calorie tracker.

If you built auth with an AI tool, this is the question to ask: what happens when someone submits the wrong password 10,000 times?


Cal AI acquired MyFitnessPal — which already had its own breach

Here's where the story gets more complicated.

Cal AI had previously acquired MyFitnessPal, the fitness tracking platform that suffered one of the largest health data breaches on record. In 2018, MyFitnessPal had 150 million user records exposed — usernames, email addresses, and hashed passwords. It's still one of the biggest breaches by volume in the health and fitness category.

An acquisition ties two platforms together. Their data practices, their breach history, their user trust — all of it becomes part of the same story. When users sign up for Cal AI, they're now interacting with a company that carries the MyFitnessPal breach history alongside this new one.

For builders thinking about acquisitions — or about building apps in spaces where acquisitions are likely — this matters. The security posture you ship with becomes part of what acquirers inherit. And acquirers carrying breach history have an elevated responsibility to demonstrate they've raised their standards, not lowered them.

The March 2026 breach suggests that elevated responsibility wasn't met.


What to check before you ship health data

If you're building anything that touches health, fitness, weight, diet, or anything people consider personal about their bodies, these are the checks that matter before you go live.

  • Firebase security rules. Look for: no "allow read, write: if true" at the root level. Verify: Firebase Console → Firestore → Rules.
  • Unauthenticated reads. Look for: collections readable without logging in. Verify: curl the REST endpoint without an auth token.
  • PIN / short-code auth. Look for: rate limiting on verification attempts. Verify: submit 20 wrong PINs in a row and check the response.
  • Rate limiting coverage. Look for: login, PIN verify, password reset, and OTP all covered. Verify: test each endpoint manually or with a scanner.
  • Sensitive data inventory. Look for: a list of every field that could be health-related. Verify: enumerate every collection and document schema.
  • Age verification. Look for: whether a child's data could be in your system. Verify: check the registration flow for DOB requirements.
  • Data minimization. Look for: data you store but don't actually need. Verify: audit what you collect against what your features require.

Run through this checklist before you share the link publicly. Not after the first press mention.
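The wrong-PIN check is easy to script. A hedged sketch — `attempt` is a function you supply that submits one bad guess to your own staging endpoint (e.g. via requests.post) and returns the HTTP status code; the probe only reports whether the backend ever pushed back:

```python
def rate_limiting_detected(attempt, tries=20):
    """Send `tries` deliberately wrong PINs; return True if the backend
    ever answers with a lockout or throttle status (423 / 429)."""
    for i in range(tries):
        status = attempt(f"{i:04d}")
        if status in (423, 429):   # Locked / Too Many Requests
            return True
    return False
```

If this returns False against your own staging environment, that endpoint needs a rate limiter before launch.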

Checking your Firebase rules right now

Open a terminal and run this against your own project:

# Realtime Database — open means you get data back, not a 403
curl https://YOUR-PROJECT-ID.firebaseio.com/.json

# Firestore — check a collection you know exists
curl "https://firestore.googleapis.com/v1/projects/YOUR-PROJECT-ID/databases/(default)/documents/users"

If either of those returns data without you passing an auth token, your database is open. Fix the rules before you do anything else.
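If you want to script that judgment rather than eyeball curl output, a rough classifier along the lines described above — 2xx with real data means open, 401/403 or a "Permission denied" error body means locked (a heuristic, not an exhaustive check):

```python
def looks_open(status, body):
    """Rough check: did an unauthenticated read succeed?"""
    return 200 <= status < 300 and "permission denied" not in body.lower()
```

Feed it the status code and response body from the curl commands (or an equivalent urllib request) against your own project.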

The locked-down Firestore baseline looks like this — start here and open up only what you need:

rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // Users can only access their own documents
    match /users/{userId} {
      allow read, write: if request.auth != null
                         && request.auth.uid == userId;
    }

    // Default: deny everything
    match /{document=**} {
      allow read, write: if false;
    }
  }
}

What you should do right now

The Cal AI breach is a direct consequence of two things going wrong at once: a backend that anyone could read, and auth that anyone could brute-force. Neither failure was exotic. Both were defaults — the defaults you get when you scaffold fast and skip the security pass.

Here's the checklist:

  1. Check your Firebase or Supabase security rules today. Open the console. Read what's there. If you see anything that grants access without checking request.auth, that's your first fix.

  2. Test your own app without logging in. Open an incognito window and hit your API endpoints directly. Open the network tab and replay those requests without the auth cookie. If data comes back, you have an unprotected endpoint.

  3. Add rate limiting to every auth endpoint. Login, PIN verification, OTP submission, password reset — all of them. If you're on Next.js, upstash/ratelimit over Redis is four lines of code. If you're on a vibe coding platform, check what rate limiting they provide and verify it's actually on.

  4. Audit what health data you're storing. List every field. Ask whether you need each one. Data you don't store can't be breached.

  5. Scan before you share the link. Flowpatrol checks for open Firebase and Supabase backends, unprotected endpoints, missing rate limiting, and exposed credentials in client-side code. Paste your URL and find out where you stand before vibecodelegend does.

The attacker's name was a taunt. Don't let it apply to your app.


This case study is based on public reporting by Cybernews, SC Media, HackRead, and Kiteworks. The breach was first reported on March 9, 2026, when the threat actor posted the dataset on BreachForums. The MyFitnessPal 2018 breach is documented in public records including Under Armour's disclosure and reporting by Wired.
