

Apr 16, 2026 · 10 min read

Same default, four breaches: what Moltbook, Tea, Cal AI, and Quittr all shipped to production

Four of the biggest vibe-coded consumer apps of the last year shipped with the same root cause: the BaaS default that said yes to everyone. One was Supabase. Three were Firebase. All four made the news. Here's the pattern, the shared anatomy, and the one check that catches all of them.

Flowpatrol Team · Explainer

One rule, four headlines

In the last twelve months, four consumer apps built by small teams with modern stacks went from viral launch to data-breach headline. Different founders. Different data. Different tools. Same root cause: the backend default shipped to production unchanged.

| App | Stack | Default that shipped | Data exposed | Case study |
|---|---|---|---|---|
| Moltbook | Supabase | RLS disabled on all public tables | 1.5M agent records, 35K emails, every DM | Case study |
| Tea | Firebase Storage | allow read, write: if true | 13K government IDs, 1.1M messages, GPS | Case study |
| Cal AI | Firebase Firestore | allow read, write: if true | 3.2M health records, meal logs, a child's data | Case study |
| Quittr | Firebase Database | allow read, write: if true | 600K confessions, habits, 100K minors | Case study |

If one team ships an open database, that's an oversight. If four teams — including a Karpathy-endorsed AI platform, a dating safety app for women, a calorie tracker with 3.2 million users, and a $1M quit-porn app built in 10 days — ship the same bug, the bug isn't in the teams. It's in the default.

This article is about the shared anatomy. What do these four breaches have in common, why does the default keep surviving to production, and what's the one check that catches all of them?

Four app panels — Moltbook, Tea, Cal AI, Quittr — each with their data category and the same root cause rule, converging on a shared center: 'the default shipped to production'


The shared anatomy

Strip the press coverage away and every one of these breaches follows the same five-step sequence. No exceptions.

1. Small team picks a BaaS. Supabase or Firebase. The choice is downstream of what the AI scaffolder suggests or what the quickstart tutorial uses. In all four cases, the team was first-time founders or a solo builder.

2. The default is permissive. Firebase's test mode: allow read, write: if true. Supabase's quickstart: RLS disabled on public-schema tables, anon key in the bundle. Both defaults exist because they make development frictionless. Every read works. Every write succeeds. The app functions immediately.

3. The app ships. Nothing between development and production blocks the default. Firebase doesn't refuse to deploy if true rules. Supabase doesn't fail the build when RLS is disabled. The AI scaffolder doesn't check. The CI pipeline doesn't check. The default survives because nothing kills it.

4. The app succeeds. This is the part that makes the pattern cruel. Moltbook got the Karpathy endorsement. Tea reached hundreds of thousands of women. Cal AI crossed 3.2 million users. Quittr hit $1M in revenue in six months. Success obscures the default — because the app works, revenue grows, and the rules surface is invisible from the client.

5. A researcher finds the default. Every one of these breaches was discovered by a security researcher running the most basic possible check: can I read this data without authenticating? In every case, the answer was yes. No chained exploit. No zero-day. Just the default, unchanged, in production.

Note

This sequence is not specific to these four apps. The same pattern has been documented across 916 Firebase projects exposing 125 million records. The four named apps are notable because they're the ones that made the news — not because they're the only ones running the default.


Two platforms, one failure mode

Moltbook is Supabase. The other three are Firebase. The specific defaults look different, but they fail for the same structural reason.

Firebase: if true

// Firebase test-mode default
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /{document=**} {
      allow read, write: if true;
    }
  }
}

Selected during project creation. Lives in a separate Console tab. Only warning: a yellow banner you have to go looking for. No deployment blocker.

Firebase has three separate rule surfaces — Firestore, Realtime Database, and Storage — each with independent rules files. Tea's breach was Storage. Cal AI's was Firestore. Quittr's was the database. Securing one doesn't secure the others, and most builders don't even know all three exist.
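The three surfaces are wired up independently in firebase.json, which is part of why one of them is so easy to forget. A minimal config that covers all three might look like this (the file names are the conventional defaults the CLI generates; a project missing one of these entries leaves that surface on whatever the Console last set):

```json
{
  "firestore": { "rules": "firestore.rules" },
  "database":  { "rules": "database.rules.json" },
  "storage":   { "rules": "storage.rules" }
}
```

If any of those three rules files still contains the test-mode default, deploying the other two does nothing to close it.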

Supabase: RLS disabled

-- What Moltbook's public tables looked like
SELECT schemaname, tablename, rowsecurity
FROM pg_tables WHERE schemaname = 'public';

-- rowsecurity = false on every table
-- Anon key in the JavaScript bundle = read/write access to everything

Supabase's anon key is explicitly designed to be exposed in the client bundle. The docs say it's safe. That's true — if and only if RLS is enabled on every table in the public schema. Without RLS, the anon key becomes a master key. The quickstart default until recently was RLS disabled.
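The consequence is easy to sketch. The request below is one any visitor can construct from the anon key shipped in the bundle; with RLS off, Supabase's auto-generated REST layer (PostgREST) answers it with rows, no user session required. The project URL, key, and table name here are hypothetical, for illustration only:

```python
import urllib.request

def anon_read_request(project_url: str, anon_key: str, table: str) -> urllib.request.Request:
    """Build the unauthenticated PostgREST read that the anon key permits.

    With RLS disabled on `table`, sending this request returns rows to
    anyone holding the key that ships in the JS bundle.
    """
    return urllib.request.Request(
        f"{project_url}/rest/v1/{table}?select=*&limit=5",
        headers={
            "apikey": anon_key,                     # the public key from the bundle
            "Authorization": f"Bearer {anon_key}",  # PostgREST also accepts it as a bearer token
        },
    )

# Hypothetical project and table:
req = anon_read_request("https://example-project.supabase.co", "eyJ...anon", "agents")
```

Nothing about this request distinguishes an attacker from the app's own client. That is the whole problem.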

The structural overlap

Both platforms share three properties that let the default survive:

| Property | Firebase | Supabase |
|---|---|---|
| Default is permissive | if true in test mode | RLS disabled in quickstart |
| Client can't distinguish | App works identically with open or locked rules | App works identically with or without RLS |
| No deployment blocker | firebase deploy ships test mode | supabase db push ships RLS-disabled tables |

The invisible-from-the-client property is the most important. When a builder is staring at their app and every read returns the correct data, there is no signal that the backend is also returning that data to anyone else. The happy path and the breach path are the same observation from the client.


What the data categories tell us

The four apps stored four different types of data, and the sensitivity escalation is worth looking at:

| App | Data category | Why it's worse than PII |
|---|---|---|
| Moltbook | API tokens, emails, private DMs | Users pasted OpenAI keys into DMs. Credentials-in-messages is an emerging breach category. |
| Tea | Government IDs, GPS, assault disclosures | Built to protect women. 4chan doxxed assault subjects. Physical safety risk. Class-action lawsuits. |
| Cal AI | Health records, children's data | HIPAA, COPPA, CCPA exposure. A child born in 2014 had weight and meal data breached. |
| Quittr | Personal confessions, habits, 100K minors | Sextortion and harassment material. Free-text "why I want to quit" tied to real ages. |

Each breach is progressively harder to recover from. Moltbook's users can rotate their API tokens. Cal AI's users can't un-breach their health history. Tea's users can't un-expose their home GPS coordinates. Quittr's users — 100,000 of them minors — can't un-share what they told a quit-porn app about their most private behavior.

The pattern is clear: vibe-coded apps are reaching into categories of data that BaaS defaults were never designed to protect, and nobody is checking whether the default matches the data.

Heads up

If your app handles health data, identity documents, location, financial records, or any form of sensitive personal disclosure, the standard BaaS default is not sufficient. The default was designed for a tutorial. Your data requires rules that were designed for your data.


Why the pattern keeps repeating

We've published four separate case studies now. The breach keeps happening because three incentives are aligned against the fix:

1. Tutorials teach the happy path. Firebase's getting-started guide has you select test mode because it makes the tutorial shorter. Supabase's quickstart works without RLS because adding policies would triple the tutorial length. AI scaffolders are trained on tutorials. The happy path is what they reproduce.

2. Speed is celebrated. "Built in 10 days." "Shipped in a weekend." "From idea to $1M in six months." The vibe-coder ecosystem rewards velocity, and security review is the thing that slows you down. Nobody tweets "I spent three hours writing Firestore rules before I deployed."

3. The feedback loop is broken. The client can't tell you the rules are wrong. The build system can't tell you. The deploy step can't tell you. The revenue can't tell you — Quittr was making $250K/month with an open database. There is no signal that anything is wrong until a researcher sends you a DM. And if you don't have a disclosure inbox, you might not see that DM for eight months.

The only thing that breaks this cycle is an explicit, deliberate check that someone runs against the default, because the default will never check itself.


The one check that catches all four

Every breach in this article would have been caught by the same test: try to read production data without authenticating.

For Firebase (Firestore / Realtime Database)

Open the Firebase Console. Go to Rules. Search for if true. If it appears at the root level, the database is open.

Then test it:

curl "https://firestore.googleapis.com/v1/projects/YOUR_PROJECT/databases/(default)/documents/users?pageSize=1"

If you get a document back without an auth header, the rules are wrong.

For Firebase Storage

Same Console, different tab. Storage → Rules. Same search for if true.

For Supabase

Paste this into your Supabase SQL editor:

SELECT schemaname, tablename, rowsecurity
FROM pg_tables WHERE schemaname = 'public';

Anything returning rowsecurity = false is reachable from any browser using the anon key in your bundle.
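If you'd rather gate this in code than eyeball query output, the classification is a few lines. The sketch below is hypothetical (the function name and tuple shape are mine); it assumes you've run the query above through whatever Postgres client you use and hold the rows as (schemaname, tablename, rowsecurity) tuples:

```python
def rls_exposed(rows: list[tuple[str, str, bool]]) -> list[str]:
    """Return public-schema tables reachable with just the anon key.

    `rows` mirrors the pg_tables query above: (schemaname, tablename, rowsecurity).
    Any public table with rowsecurity=False is readable and writable by anyone
    holding the anon key from the JS bundle.
    """
    return [table for schema, table, rls in rows if schema == "public" and not rls]

# Example: two tables, one protected, one left on the default
rows = [("public", "profiles", True), ("public", "agents", False)]
# rls_exposed(rows) -> ["agents"]
```

An empty list is the only acceptable result before you deploy.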

The fix (both platforms)

Firebase — deny by default:

rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /users/{userId} {
      allow read, write: if request.auth != null
                         && request.auth.uid == userId;
    }
    match /{document=**} {
      allow read, write: if false;
    }
  }
}

Supabase — enable RLS + add a policy:

ALTER TABLE public.your_table ENABLE ROW LEVEL SECURITY;

CREATE POLICY "Users read own data"
  ON public.your_table FOR SELECT
  USING (auth.uid() = user_id);

Start from denial. Open what you've decided to expose. Never start from permissive and subtract.


What should change (and what won't)

It would be easy to close with "builders should be more careful." But after four case studies, that message doesn't survive contact with reality. Builders are doing what the tools encourage: shipping fast, iterating on feedback, and trusting the defaults.

What should change is the defaults themselves:

  • firebase deploy should refuse to deploy if true root rules without an explicit override flag. Google publishes their own insecure-rules guidance. Enforcing it at deploy time is a one-line check.
  • Supabase should fail db push when any public-schema table has RLS disabled. Supabase already defaults to RLS-enabled on newly created projects. Extending that to the deploy step closes the loop.
  • AI code generators should surface the rules file. If Cursor, Lovable, Bolt, or Claude generates a Firebase initializeApp() call, the rules file should appear in the same diff. Currently, it doesn't — and that's why these breaches exist.
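None of those blockers exist today, but the first is small enough to sketch yourself as a pre-deploy CI step. This is not real firebase CLI behavior: the regex, file name, and exit-code convention are all assumptions, and a real implementation would parse the rules language rather than pattern-match it. It still refuses the obvious footgun:

```python
import re
import sys

# Matches `allow read, write: if true;` (any spacing, any verb list),
# the shape of the test-mode default.
ALLOW_ALL = re.compile(r"allow\s+[\w,\s]*:\s*if\s+true\s*;")

def rules_are_open(rules_source: str) -> bool:
    """True if the rules source contains an unconditional allow."""
    return bool(ALLOW_ALL.search(rules_source))

def gate(path: str) -> int:
    """Return a nonzero exit code when the rules file at `path` is open."""
    with open(path) as f:
        if rules_are_open(f.read()):
            print(f"refusing to deploy: {path} contains 'if true'", file=sys.stderr)
            return 1
    return 0
```

Wired in before your deploy command (for example, `python check_rules.py firestore.rules && firebase deploy`), this is the explicit check that the default will never run on itself.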

Until those changes ship, the check is yours. A one-off drill finds one bug on one app; a continuous scan walks every route and every surface on every deploy. That's the difference between checking once and checking continuously — and it's how a default that was set on project creation day gets caught before it becomes a headline.


Coverage of the four breaches in this article is based on public reporting by Wiz Research (Moltbook, January 2026), TechCrunch and NPR (Tea, July 2025), Cybernews and SC Media (Cal AI, March 2026), and 404 Media (Quittr, March 2026). Exposure figures are as documented in those reports. Our individual case studies are linked in the table above.
