
May 11, 2026 · 8 min read

Three Apps. Three Firebase Breaches. One Rule That Caused All of Them.

Cal AI lost 3.2M health records. Tea leaked 72,000 government IDs. 900+ sites exposed 125M records. The root cause was identical every time: allow read, write: if true. Here's how to fix it in minutes.

Flowpatrol Team · Security

Everything works. That's the problem.

You built the app. Firebase is wired up. Data flows in, data flows out. There are no permission errors, no auth failures, no red banners. The app feels ready to ship.

"Everything works" is what Firebase Test Mode is designed to deliver. No friction during development — because there are no security rules checking anything. Your database is open to the entire internet, and nothing in your normal workflow will tell you.

Three recent breaches trace back to this exact starting point. Cal AI lost 3.2 million health records, dates of birth, and 4-digit PINs to BreachForums in March 2026. Tea exposed 72,000 images including 13,000 government IDs and 1.1 million private messages in July 2025. A 2024–2025 sweep of over 900 sites found 125 million user records — 19 million of them plaintext passwords — all sitting open on misconfigured Firebase instances.

Same rule. Same mistake. Three completely different apps.


Why Test Mode ships to production

Firebase gives you a choice when you create a new project: locked-down rules, or Test Mode. Test Mode is one click. It works immediately. And it looks like this:

// Firestore Test Mode — what ships by default
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /{document=**} {
      allow read, write: if true;
    }
  }
}

That single rule says: anyone, anywhere, no authentication required, can read or write any document in your database. It is not a development sandbox. It is your production database, open to the internet.

Firebase does warn you — a yellow banner appears in the Console Rules tab when your database is public. But that banner only shows if you visit the Rules tab. Deploy via CLI, scaffold with Lovable or Bolt, build with Cursor? You may never see it. There's no email alert. No deployment blocker. No "are you absolutely sure?"

AI scaffolding tools compound the problem. When Lovable, Bolt, or Cursor generate a Firebase app, they wire up reads, writes, queries, and mutations. They don't write security rules. The app works perfectly because the database is open — and the AI has no way to distinguish "works" from "works securely."

Heads up

The Firebase Console shows a 30-day countdown when your database is in Test Mode. This is not a hard deadline. After 30 days, Firebase sends a reminder email. Your rules do not change. Your database stays open. The countdown is cosmetic — your data remains public until you fix the rules yourself.

Figure: Firebase Test Mode with allow read/write: if true open to anyone on the internet, compared to a properly locked-down Firebase project requiring authentication and ownership checks.


The three surfaces nobody covers

Here's what most Firebase tutorials skip: Firebase has three separate storage products, and each has its own rules file. Securing one does not secure the others.

| Product | What it stores | Where rules live | Common mistake |
| --- | --- | --- | --- |
| Firestore | Structured documents (JSON) | Firestore → Rules tab | Test Mode default never changed |
| Realtime Database | JSON tree (the original Firebase database) | Realtime Database → Rules tab | Assumed same as Firestore; never checked |
| Storage | Files, images, uploads | Storage → Rules tab | Database secured but Storage forgotten entirely |

Tea is the canonical example of the third row. The app's database rules had been secured — someone had checked them. But Storage is a different tab, a different product, and a different rules file. The check stopped one step short.

The Storage bucket — where 13,000 government IDs, 59,000 profile photos, and 1.1 million messages lived — shipped with the default open configuration:

// Firebase Storage default (Test Mode) — what Tea shipped:
rules_version = '2';
service firebase.storage {
  match /b/{bucket}/o {
    match /{allPaths=**} {
      allow read, write: if true;
    }
  }
}

That's one line that made 72,000 files downloadable by anyone who knew the bucket URL. The database rules didn't matter. Storage had its own open door, and nobody had looked for it.


Minimum viable rules by app type

You don't need complex rules to be safe. You need a locked-down default with specific access opened only where you actually need it. Start with if false at the root. Open access explicitly. Never the other way around.

| App type | Minimum Firestore rule | Minimum Storage rule |
| --- | --- | --- |
| Consumer / social | Auth required for reads; owner-only for writes | request.auth.uid == userId on user files |
| Health / fitness | Owner-only read and write; no public collections | No client reads for health data — Admin SDK only |
| Identity verification | Server-side only (allow read: if false) | Gov IDs: write once, allow read: if false always |
| SaaS / multi-tenant | Auth + org membership check on every collection | Signed URLs served server-side; no direct client access |
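The SaaS / multi-tenant row is the only one without a full example in this post, so here is a sketch. It assumes org data lives under /orgs/{orgId}/ and that membership is recorded as documents at /orgs/{orgId}/members/{uid}; both path choices are hypothetical and should match your own data layout:

```
// SaaS / multi-tenant sketch: assumes membership docs at /orgs/{orgId}/members/{uid}
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {

    // Org data readable and writable only by that org's members
    match /orgs/{orgId}/{document=**} {
      allow read, write: if request.auth != null
        && exists(/databases/$(database)/documents/orgs/$(orgId)/members/$(request.auth.uid));
    }

    // Deny everything not explicitly opened
    match /{document=**} {
      allow read, write: if false;
    }
  }
}
```

Membership documents themselves are best created server-side with the Admin SDK, which bypasses rules; otherwise the first member of a new org could never be written.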

Consumer app — user profiles private, public content readable:

rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {

    match /users/{userId} {
      allow read, write: if request.auth != null
                         && request.auth.uid == userId;
    }

    match /posts/{postId} {
      allow read: if true;
      allow write: if request.auth != null
                   && request.auth.uid == resource.data.authorId;
    }

    // Deny everything not explicitly opened
    match /{document=**} {
      allow read, write: if false;
    }
  }
}

Health app — no public reads, no cross-user access, nothing writable without a session:

rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {

    match /records/{userId}/{document=**} {
      allow read, write: if request.auth != null
                         && request.auth.uid == userId;
    }

    // Deny everything not explicitly opened
    match /{document=**} {
      allow read, write: if false;
    }
  }
}

Identity verification — government IDs and verification selfies should never be readable from the client. Read them server-side via the Admin SDK only:

// Firebase Storage rules for an identity verification app
rules_version = '2';
service firebase.storage {
  match /b/{bucket}/o {

    // Gov IDs and selfies: write once on upload, never client-readable
    match /verifications/{userId}/{file} {
      allow read: if false;
      allow write: if request.auth != null
                   && request.auth.uid == userId;
    }

    // Profile photos: any authenticated user can view, only owner can update
    match /profiles/{userId}/{file} {
      allow read: if request.auth != null;
      allow write: if request.auth != null
                   && request.auth.uid == userId;
    }

    // Deny everything else
    match /{allPaths=**} {
      allow read, write: if false;
    }
  }
}

The pattern is consistent: allow read: if false for anything that should only be read server-side. Once a user uploads an ID image, that image gets processed and locked. An attacker with any valid Firebase token shouldn't be able to read another user's government ID — and with allow read: if false, they can't.
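On the server, that read path stays available because the Admin SDK is not subject to security rules. A hedged sketch in Python; the bucket name, credentials setup, and file layout are assumptions, not anything the breached apps actually ran:

```python
# Server-side read of a client-unreadable file via the Firebase Admin SDK.
# The Admin SDK runs with service-account privileges and bypasses Storage
# security rules, so `allow read: if false` does not block this path.

def verification_path(user_id: str, filename: str) -> str:
    """Mirrors the rules layout above: verifications/{userId}/{file}."""
    return f"verifications/{user_id}/{filename}"

def read_verification_image(user_id: str, filename: str) -> bytes:
    # Lazy import: requires `pip install firebase-admin` plus
    # service-account credentials (e.g. GOOGLE_APPLICATION_CREDENTIALS).
    import firebase_admin
    from firebase_admin import storage

    if not firebase_admin._apps:  # initialize the default app once
        firebase_admin.initialize_app(
            options={"storageBucket": "YOUR-PROJECT-ID.appspot.com"}
        )
    blob = storage.bucket().blob(verification_path(user_id, filename))
    return blob.download_as_bytes()
```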


The 60-second test: check all three surfaces

Check if your Firebase app is open right now. Run each of these — one per product — replacing YOUR-PROJECT-ID with your actual project ID.

Realtime Database:

curl "https://YOUR-PROJECT-ID.firebaseio.com/.json"

Firestore:

curl "https://firestore.googleapis.com/v1/projects/YOUR-PROJECT-ID/databases/(default)/documents/users"

Storage (substitute a real file path from your bucket):

curl -o /dev/null -s -w "%{http_code}" \
  "https://firebasestorage.googleapis.com/v0/b/YOUR-PROJECT-ID.appspot.com/o/users%2Ftest.jpg?alt=media"

What you want to see for each:

# Realtime Database — good:
{"error":"Permission denied"}

# Firestore — good:
{"error":{"code":403,"message":"Missing or insufficient permissions."}}

# Storage — good:
403

# Any of these means your data is public:
# - JSON user data in the Realtime Database response
# - Firestore documents returned without a 403
# - Storage returning 200 with file content

If any of those tests returns data instead of a permission error, that surface is open.
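If you would rather script the check, here is a minimal sketch using only the Python standard library. The URL patterns mirror the three curl commands above; the placeholder project ID and sample file path are assumptions you replace with your own:

```python
# Probe all three Firebase surfaces unauthenticated, as in the curl
# commands above. Placeholder project ID / file path are assumptions.
import urllib.error
import urllib.request

def probe_urls(project_id: str, sample_file: str = "users%2Ftest.jpg") -> dict:
    """Build the public endpoint for each of the three surfaces."""
    return {
        "realtime_db": f"https://{project_id}.firebaseio.com/.json",
        "firestore": (
            "https://firestore.googleapis.com/v1/projects/"
            f"{project_id}/databases/(default)/documents/users"
        ),
        "storage": (
            f"https://firebasestorage.googleapis.com/v0/b/"
            f"{project_id}.appspot.com/o/{sample_file}?alt=media"
        ),
    }

def is_open(status_code: int) -> bool:
    """A 2xx answer to an unauthenticated probe means the surface is public."""
    return 200 <= status_code < 300

def check_surface(url: str) -> bool:
    """True if the surface returns data; False on 401/403 (rules active)."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return is_open(resp.status)
    except urllib.error.HTTPError as err:
        return is_open(err.code)
```

Run check_surface(url) for each entry in probe_urls("your-project-id"); any True result means that surface answered an unauthenticated request with data.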

Tip

Run all three tests, not just the one for the product you think you're using. Many Firebase apps use Firestore as the primary database and Firebase Storage for file uploads without realizing both need separate rules. Tea secured one and forgot the other. Don't stop after the first clean result.


Five steps to audit right now

You built something real. Now make it bulletproof.

  1. Open the Firebase Console and visit every Rules tab separately. Firestore, Realtime Database, and Storage are three different tabs with three different rule sets. Check all three. If you see allow read, write: if true on a wildcard match at the root level, fix it before anything else.

  2. Run the 60-second test above. Copy your project ID, run the three curl commands, and read the responses. A permission error on all three means your rules are active. Anything else means data is currently public.

  3. Search your rules for if true. If if true appears on a catch-all pattern like /{document=**} or /{allPaths=**}, you have Test Mode active on that surface. Every document or file in that path is readable by anyone on the internet.

  4. Check Storage separately — even if you secured Firestore. If your app accepts any file uploads — avatars, documents, verification photos, anything — open the Storage Rules tab and verify it's not on the default open configuration. This is the step Tea missed, and it's the most commonly skipped step.

  5. Scan your full app. Open Firebase rules are one check among many. Flowpatrol finds them automatically alongside missing auth on API routes, exposed secrets, and misconfigured backends. Paste your URL and see what comes back — before your users do.
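The if true search in step 3 can be automated. A rough sketch in Python; a real rules parser would track brace nesting properly, while this only flags the common single-level catch-all pattern:

```python
import re

# Flag `if true` grants that sit directly under a catch-all match
# such as /{document=**} (Firestore) or /{allPaths=**} (Storage).
CATCH_ALL = re.compile(r"match\s+/\{\w+=\*\*\}")
OPEN_GRANT = re.compile(r"allow\s+[\w,\s]+:\s*if\s+true")

def find_open_catch_alls(rules_text: str) -> list[str]:
    """Return offending lines: an open grant inside a catch-all block."""
    findings = []
    inside_catch_all = False
    for line in rules_text.splitlines():
        if CATCH_ALL.search(line):
            inside_catch_all = True
        elif inside_catch_all and OPEN_GRANT.search(line):
            findings.append(line.strip())
        elif inside_catch_all and "}" in line:
            inside_catch_all = False
    return findings
```

Feed it the contents of each rules file; an empty result for all three surfaces means no catch-all is wide open, though it is a lint, not a substitute for the live curl test.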


Sources: Cal AI breach — Kiteworks, March 2026; Tea App breach — TechCrunch, July 26, 2025; NPR, August 2, 2025; Firebase mass misconfiguration — SecurityWeek, March 2024; The Register, March 2024.
