## One rule, four headlines
In the last twelve months, four consumer apps built by small teams with modern stacks went from viral launch to data-breach headline. Different founders. Different data. Different tools. Same root cause: the backend default shipped to production unchanged.
| App | Stack | Default that shipped | Data exposed | Scale |
|---|---|---|---|---|
| Moltbook | Supabase | RLS disabled on all public tables | 1.5M agent records, 35K emails, every DM | Case study |
| Tea | Firebase Storage | `allow read, write: if true` | 13K government IDs, 1.1M messages, GPS | Case study |
| Cal AI | Firebase Firestore | `allow read, write: if true` | 3.2M health records, meal logs, a child's data | Case study |
| Quittr | Firebase Database | `allow read, write: if true` | 600K confessions, habits, 100K minors | Case study |
If one team ships an open database, that's an oversight. If four teams — including a Karpathy-endorsed AI platform, a dating safety app for women, a calorie tracker with 3.2 million users, and a $1M quit-porn app built in 10 days — ship the same bug, the bug isn't in the teams. It's in the default.
This article is about the shared anatomy. What do these four breaches have in common, why does the default keep surviving to production, and what's the one check that catches all of them?
## The shared anatomy
Strip the press coverage away and every one of these breaches follows the same five-step sequence. No exceptions.
1. Small team picks a BaaS. Supabase or Firebase. The choice is downstream of what the AI scaffolder suggests or what the quickstart tutorial uses. In all four cases, the team was first-time founders or a solo builder.
2. The default is permissive. Firebase's test mode: `allow read, write: if true`. Supabase's quickstart: RLS disabled on public-schema tables, anon key in the bundle. Both defaults exist because they make development frictionless. Every read works. Every write succeeds. The app functions immediately.
3. The app ships. Nothing between development and production blocks the default. Firebase doesn't refuse to deploy `if true` rules. Supabase doesn't fail the build when RLS is disabled. The AI scaffolder doesn't check. The CI pipeline doesn't check. The default survives because nothing kills it.
4. The app succeeds. This is the part that makes the pattern cruel. Moltbook got the Karpathy endorsement. Tea reached hundreds of thousands of women. Cal AI crossed 3.2 million users. Quittr hit $1M in revenue in six months. Success obscures the default — because the app works, revenue grows, and the rules surface is invisible from the client.
5. A researcher finds the default. Every one of these breaches was discovered by a security researcher running the most basic possible check: can I read this data without authenticating? In every case, the answer was yes. No chained exploit. No zero-day. Just the default, unchanged, in production.
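That last step is scriptable. Here is a minimal sketch of the researcher's check, using only the Python standard library against Firestore's public REST endpoint (the project ID is a placeholder, not a real project; a Supabase PostgREST endpoint can be probed the same way):

```python
# Sketch of the step-5 check: can we read Firestore documents over
# the public REST API without sending any credentials at all?
# "your-project-id" is a placeholder, not a real project.
import json
import urllib.error
import urllib.request

def unauthenticated_read(url: str) -> bool:
    """Return True if the endpoint hands back documents with no auth header."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return "documents" in json.load(resp)
    except (urllib.error.URLError, ValueError):
        # HTTPError (e.g. 403 PERMISSION_DENIED) subclasses URLError:
        # either the rules held, or the endpoint was unreachable.
        return False

url = ("https://firestore.googleapis.com/v1/projects/your-project-id"
       "/databases/(default)/documents/users?pageSize=1")
```

If this returns `True` against your production project, you have the same bug as all four apps above.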
## Two platforms, one failure mode
Moltbook is Supabase. The other three are Firebase. The specific defaults look different, but they fail for the same structural reason.
### Firebase: `if true`
```
// Firebase test-mode default
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /{document=**} {
      allow read, write: if true;
    }
  }
}
```
Selected during project creation. Lives in a separate Console tab. Only warning: a yellow banner you have to go looking for. No deployment blocker.
Firebase has three separate rule surfaces — Firestore, Realtime Database, and Storage — each with independent rules files. Tea's breach was Storage. Cal AI's was Firestore. Quittr's was the database. Securing one doesn't secure the others, and most builders don't even know all three exist.
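The split is visible in the Firebase CLI's own config. A typical `firebase.json` wires each surface to its own rules file (file names below are the CLI defaults) — securing `firestore.rules` does nothing for the other two:

```json
{
  "firestore": { "rules": "firestore.rules" },
  "database":  { "rules": "database.rules.json" },
  "storage":   { "rules": "storage.rules" }
}
```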
### Supabase: RLS disabled
```sql
-- What Moltbook's public tables looked like
SELECT schemaname, tablename, rowsecurity
FROM pg_tables WHERE schemaname = 'public';
-- rowsecurity = false on every table
-- Anon key in the JavaScript bundle = read/write access to everything
```
Supabase's anon key is explicitly designed to be exposed in the client bundle. The docs say it's safe. That's true — if and only if RLS is enabled on every table in the public schema. Without RLS, the anon key becomes a master key. The quickstart default until recently was RLS disabled.
### The structural overlap
Both platforms share three properties that let the default survive:
| Property | Firebase | Supabase |
|---|---|---|
| Default is permissive | `if true` in test mode | RLS disabled in quickstart |
| Client can't distinguish | App works identically with open or locked rules | App works identically with or without RLS |
| No deployment blocker | `firebase deploy` ships test mode | `supabase db push` ships RLS-disabled tables |
The invisible-from-the-client property is the most important. When a builder is staring at their app and every read returns the correct data, there is no signal that the backend is also returning that data to anyone else. The happy path and the breach path are the same observation from the client.
## What the data categories tell us
The four apps stored four different types of data, and the sensitivity escalation is worth looking at:
| App | Data category | Why it's worse than PII |
|---|---|---|
| Moltbook | API tokens, emails, private DMs | Users pasted OpenAI keys into DMs. Credentials-in-messages is an emerging breach category. |
| Tea | Government IDs, GPS, assault disclosures | Built to protect women. 4chan doxxed assault subjects. Physical safety risk. Class-action lawsuits. |
| Cal AI | Health records, children's data | HIPAA, COPPA, CCPA exposure. A child born in 2014 had weight and meal data breached. |
| Quittr | Personal confessions, habits, 100K minors | Sextortion and harassment material. Free-text "why I want to quit" tied to real ages. |
Each breach is progressively harder to recover from. Moltbook's users can rotate their API tokens. Cal AI's users can't un-breach their health history. Tea's users can't un-expose their home GPS coordinates. Quittr's users — 100,000 of them minors — can't un-share what they told a quit-porn app about their most private behavior.
The pattern is clear: vibe-coded apps are reaching into categories of data that BaaS defaults were never designed to protect, and nobody is checking whether the default matches the data.
## Why the pattern keeps repeating
We've published four separate case studies now. The breach keeps happening because three incentives are aligned against the fix:
1. Tutorials teach the happy path. Firebase's getting-started guide has you select test mode because it makes the tutorial shorter. Supabase's quickstart works without RLS because adding policies would triple the tutorial length. AI scaffolders are trained on tutorials. The happy path is what they reproduce.
2. Speed is celebrated. "Built in 10 days." "Shipped in a weekend." "From idea to $1M in six months." The vibe-coder ecosystem rewards velocity, and security review is the thing that slows you down. Nobody tweets "I spent three hours writing Firestore rules before I deployed."
3. The feedback loop is broken. The client can't tell you the rules are wrong. The build system can't tell you. The deploy step can't tell you. The revenue can't tell you — Quittr was making $250K/month with an open database. There is no signal that anything is wrong until a researcher sends you a DM. And if you don't have a disclosure inbox, you might not see that DM for eight months.
The only thing that breaks this cycle is an explicit, deliberate check that someone runs against the default, because the default will never check itself.
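The earliest place to put that explicit check is the deploy pipeline itself. Below is a sketch of a CI gate, not an official Firebase feature — it assumes the CLI's default file names, and it only covers Firestore and Storage rules, which share one language (Realtime Database rules are JSON and would need a separate check):

```python
# Sketch: fail the build before `firebase deploy` runs if any local
# rules file still grants blanket access. Not an official Firebase
# feature; file names are the CLI defaults, adjust for your repo.
import pathlib
import re
import sys

# Matches `allow read, write: if true` and single-verb variants.
PERMISSIVE = re.compile(r"allow\s+[\w,\s]+:\s*if\s+true")

def rules_are_open(rules_text: str) -> bool:
    return bool(PERMISSIVE.search(rules_text))

if __name__ == "__main__":
    offenders = [
        name for name in ("firestore.rules", "storage.rules")
        if pathlib.Path(name).exists()
        and rules_are_open(pathlib.Path(name).read_text())
    ]
    if offenders:
        print(f"Refusing to deploy, permissive rules in: {offenders}")
        sys.exit(1)
```

Wire it in as the step before `firebase deploy` and the test-mode default can no longer reach production silently.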
## The one check that catches all four
Every breach in this article would have been caught by the same test: try to read production data without authenticating.
### For Firebase (Firestore / Realtime Database)
Open the Firebase Console. Go to Rules. Search for `if true`. If it appears at the root level, the database is open.
Then test it:
```sh
curl "https://firestore.googleapis.com/v1/projects/YOUR_PROJECT/databases/(default)/documents/users?pageSize=1"
```
If you get a document back without an auth header, the rules are wrong.
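For reference, a locked-down database typically answers that same request with HTTP 403 and a body along these lines (exact wording can vary):

```json
{
  "error": {
    "code": 403,
    "message": "Missing or insufficient permissions.",
    "status": "PERMISSION_DENIED"
  }
}
```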
### For Firebase Storage
Same Console, different tab. Storage → Rules. Same search for `if true`.
### For Supabase
Paste this into your Supabase SQL editor:
```sql
SELECT schemaname, tablename, rowsecurity
FROM pg_tables WHERE schemaname = 'public';
```
Anything returning `rowsecurity = false` is reachable from any browser using the anon key in your bundle.
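One caveat worth checking in the same SQL editor: `rowsecurity = true` with zero policies means RLS is on but denies everything — safe, but it will silently break legitimate reads. The `pg_policies` catalog view shows what's actually granted:

```sql
-- Which policies exist on public-schema tables?
-- RLS enabled + no rows here = all access denied by default.
SELECT schemaname, tablename, policyname, cmd
FROM pg_policies
WHERE schemaname = 'public';
```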
### The fix (both platforms)
Firebase — deny by default:
```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /users/{userId} {
      allow read, write: if request.auth != null
                         && request.auth.uid == userId;
    }
    match /{document=**} {
      allow read, write: if false;
    }
  }
}
```
Supabase — enable RLS + add a policy:
```sql
ALTER TABLE public.your_table ENABLE ROW LEVEL SECURITY;

CREATE POLICY "Users read own data"
  ON public.your_table FOR SELECT
  USING (auth.uid() = user_id);
```
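Note that enabling RLS with only a SELECT policy denies all writes — which is the right starting point, but it means each write path needs its own policy. A matching insert policy, assuming the same `user_id` column, looks like:

```sql
-- Writes stay denied until explicitly granted.
CREATE POLICY "Users insert own data"
  ON public.your_table FOR INSERT
  WITH CHECK (auth.uid() = user_id);
```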
Start from denial. Open what you've decided to expose. Never start from permissive and subtract.
## What should change (and what won't)
It would be easy to close with "builders should be more careful." But after four case studies, that message doesn't survive contact with reality. Builders are doing what the tools encourage: shipping fast, iterating on feedback, and trusting the defaults.
What should change is the defaults themselves:
- `firebase deploy` should refuse to deploy `if true` root rules without an explicit override flag. Google publishes its own insecure-rules guidance. Enforcing it at deploy time is a one-line check.
- Supabase should fail `db push` when any public-schema table has RLS disabled. Supabase already defaults to RLS-enabled on newly created projects. Extending that to the deploy step closes the loop.
- AI code generators should surface the rules file. If Cursor, Lovable, Bolt, or Claude generates a Firebase `initializeApp()` call, the rules file should appear in the same diff. Currently, it doesn't — and that's why these breaches exist.
Until those changes ship, the check is yours. A one-off drill finds one bug on one app; a scanner that runs continuously walks the compounding chain across every route and every surface. That's the difference between checking once and checking continuously — and it's how a default set on project-creation day gets caught before it becomes a headline.
Coverage of the four breaches in this article is based on public reporting by Wiz Research (Moltbook, January 2026), TechCrunch and NPR (Tea, July 2025), Cybernews and SC Media (Cal AI, March 2026), and 404 Media (Quittr, March 2026). Exposure figures are as documented in those reports. Our individual case studies are linked in the table above.