The Firebase Misconfiguration Epidemic: 900+ Sites, 125 Million Records, and 19 Million Plaintext Passwords
This wasn't one breach — it was a pattern. Researchers scanned 5 million domains and found over 900 Firebase projects with wide-open security rules. Here's what happened, why it keeps happening, and how to check your own project in 30 seconds.
This isn't a breach. It's an epidemic.
Most security stories start with one company. One hack. One bad day. This one starts with 916 of them.
In 2024, three security researchers — Logykk, xyzeva (Eva), and MrBruh — decided to run a simple experiment. They scanned over five million domains looking for Firebase projects with misconfigured security rules. What they found was staggering: 916 websites had their entire databases exposed to the public internet, no authentication required.
The total damage: approximately 125 million user records, including 19 million plaintext passwords, 106 million email addresses, 85 million names, 34 million phone numbers, and 27 million billing records complete with bank account details.
All of it accessible to anyone who knew where to look — and looking was trivially easy.
How Firebase security rules work (and how they fail)
Firebase is one of the most popular Backend-as-a-Service platforms in the world. Acquired by Google in 2014, it powers millions of apps. It handles authentication, real-time databases, hosting, and cloud functions. For builders shipping quickly, it's a natural choice.
But Firebase has a design decision that matters enormously: security rules are opt-in, not opt-out.
When you create a new Firebase project, the console offers you a choice. You can start in "locked mode" (no reads or writes allowed) or "test mode" (everything open). Test mode looks like this:
```json
{
  "rules": {
    ".read": true,
    ".write": true
  }
}
```
Two rules. That's all it takes to make your entire database readable and writable by anyone on the internet. No login. No token. Just a GET request to a predictable URL.
For Firestore (the newer database product), the equivalent looks like this:
```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /{document=**} {
      allow read, write: if true;
    }
  }
}
```
During development, test mode makes everything work smoothly. No auth errors. No permission denied messages. No friction. The problem is what happens next — or rather, what doesn't happen next. The rules never get changed. The app ships to production. And the database sits wide open.
What the researchers found
The scale of exposed data is worth sitting with for a moment:
| Data Type | Records Exposed |
|---|---|
| Total records | ~125 million |
| Email addresses | 106 million |
| Full names | 85 million |
| Phone numbers | 34 million |
| Billing details (with bank accounts) | 27 million |
| Plaintext passwords | 19 million |
This wasn't theoretical. These were real databases with real user data, sitting open on the internet.
Some of the more notable cases:
Silid LMS — a learning management system with 27 million user records exposed. Student data, course information, personal details, all publicly accessible.
Lead Carrot — a sales and cold-calling platform with 22 million user details. Names, email addresses, phone numbers — exactly the kind of data you'd want to keep private.
MyChefTool — a restaurant point-of-sale system that exposed 14 million names and 13 million email addresses. Customer data from thousands of restaurants.
An online gambling network spanning 9 sites, which exposed 8 million bank account details. That's financial data from users who had every reason to expect confidentiality.
And then there was Chattr.
The Chattr incident: when fast food meets open databases
The investigation started because of Chattr, an AI-powered hiring system used by some of the biggest fast food chains in the United States. Applebee's. Chick-fil-A. KFC. Subway. Taco Bell. Wendy's. All of them potentially affected.
The Chattr vulnerability was particularly striking. Researchers found they could gain full database privileges simply by registering a new user. No exploit required. No SQL injection. Just standard Firebase authentication, combined with security rules that gave every authenticated user full access to everything.
That meant job applicant data — names, addresses, Social Security numbers, interview records — was accessible to anyone who took thirty seconds to create an account.
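In rules terms, the failure mode looks like this (a reconstruction of the pattern, not Chattr's actual configuration):

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // Anti-pattern: "request.auth != null" alone means any user who can
    // sign up — which is everyone — gets full access to every document.
    match /{document=**} {
      allow read, write: if request.auth != null;
    }
  }
}
```

Checking for authentication without checking *which* user is asking is barely better than no check at all when registration is open to the public.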
This discovery is what prompted the researchers to ask a bigger question: how many other Firebase projects had the same problem?
The answer was 916. At minimum.
Why this keeps happening: the test mode trap
Here's the development cycle that produces this vulnerability over and over:
1. Start a new project. Firebase Console offers "Test Mode" for quick setup. You click it because you're building, not configuring security policies.
2. Build your app. Everything works. No errors. No warnings. The data flows smoothly between client and server.
3. Deploy to production. The rules don't change. Why would they? Nothing is broken.
4. Database sits open. No alerts. No monitoring. No indication that your entire dataset is publicly available.
5. Someone finds it. Usually a researcher. Sometimes not.
The fundamental issue is friction — or the lack of it. Test mode has zero friction. Secure rules require understanding your data model, writing rule expressions, and testing edge cases. When you're shipping fast, that work gets deferred. And "deferred" often means "never."
Firebase does warn you in the console when test mode rules are active. But if you deployed via the CLI and never went back to check, you'd never see the warning. There's no email. No blocking deployment check. No "are you sure you want to push these open rules to production?"
The notification campaign (and why 76% never fixed it)
After documenting the findings, the researchers did something commendable: they tried to help. They contacted 842 of the affected site owners directly.
The results were discouraging:
| Metric | Result |
|---|---|
| Notifications sent | 842 |
| Successfully delivered | 85% |
| Bounced | 9% |
| Fixed the misconfiguration | 24% |
| Responded at all | 1% |
| Offered a bug bounty | 0.2% (2 sites) |
Read that again. Only 24% of notified sites fixed the problem. Only 1% even replied. That means more than 600 sites, after being explicitly told their databases were exposed, did nothing.
Why? A few reasons surface:
Lack of expertise. Many developers using Firebase for quick projects don't deeply understand the security model. Getting an email about "misconfigured security rules" may not register as urgent if you don't know what that means in practice.
No incident response process. These are often small teams or solo builders. There's no security team. No playbook. No one whose job it is to respond to vulnerability reports.
Assumed security. A significant number of developers believe that because Firebase is a Google product, it's secure by default. It isn't. Google provides the tools, but configuration is your responsibility.
Technical debt. Even for teams that understood the problem, fixing security rules isn't just flipping a switch. You need to understand every collection, every document structure, every access pattern. For a complex app, that's real work.
The attack is embarrassingly simple
Here's what it takes to check if a Firebase Realtime Database is exposed:
```bash
curl https://[PROJECT_ID].firebaseio.com/.json
```
That's it. One command. If you get data back instead of a 401 or 403 error, the database is open. An attacker can enumerate collections, download everything, or even write data:
```bash
# Download the entire database
curl https://[PROJECT_ID].firebaseio.com/.json

# Target specific collections
curl https://[PROJECT_ID].firebaseio.com/users.json
curl https://[PROJECT_ID].firebaseio.com/orders.json
curl https://[PROJECT_ID].firebaseio.com/payments.json
```
Finding the project ID is equally trivial. It's in the page source. Every Firebase app initializes with a config object that contains the project ID, API key, and auth domain — all visible in the JavaScript bundle. That's by design. Firebase API keys are meant to be public. But their safety depends entirely on security rules being properly configured.
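For illustration (placeholder values, not a real project's config), the object shipped in every Firebase web bundle looks roughly like this, and the database URL follows directly from it:

```javascript
// A typical Firebase web config as it appears in an app's JavaScript
// bundle. All values here are placeholders — public by design either way.
const firebaseConfig = {
  apiKey: "AIzaSy-PLACEHOLDER",
  authDomain: "example-app.firebaseapp.com",
  databaseURL: "https://example-app.firebaseio.com",
  projectId: "example-app",
  storageBucket: "example-app.appspot.com",
};

// From projectId alone, the Realtime Database REST endpoint is predictable:
const dbUrl = `https://${firebaseConfig.projectId}.firebaseio.com/.json`;
console.log(dbUrl); // → https://example-app.firebaseio.com/.json
```

The API key identifies the project; it does not authorize anything. Only the security rules decide what that URL returns.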
The Supabase parallel: same pattern, different platform
If this sounds familiar, it should. In January 2026, the Moltbook breach exposed 1.5 million records from a Supabase-powered app where Row Level Security was disabled on every table. Different platform, identical pattern.
Firebase has Security Rules. Supabase has RLS policies. Both are opt-in. Both default to open during development. Both depend on the developer remembering to configure them before shipping to production.
The comparison:
| | Firebase | Supabase |
|---|---|---|
| Security mechanism | Security Rules | Row Level Security (RLS) |
| Default state | Test mode = open | RLS disabled = open |
| Configuration | JSON/expression rules | SQL policies |
| Client credentials | API key in page source | Anon key in page source |
| Key is safe to expose? | Yes, if rules are set | Yes, if RLS is enabled |
| If misconfigured | Full database access | Full database access |
The lesson is platform-agnostic: any Backend-as-a-Service that puts security configuration in the developer's hands will have this problem. The question is only how many projects ship without it.
How to check and fix your Firebase rules right now
If you're running a Firebase project, here's how to check your security posture in under a minute.
Step 1: Check your current rules
Using the Firebase Console:
Go to Firebase Console > Your Project > Realtime Database > Rules (or Firestore > Rules). If you see .read: true or allow read, write: if true, you have a problem.
Using the Firebase CLI:
```bash
# List Realtime Database rulesets for the default instance
firebase database:rules:list

# Firestore rules live in the file referenced by firebase.json
# (typically firestore.rules) — inspect it directly
cat firestore.rules
```
Quick external check (for your own projects only):
```bash
# Realtime Database
curl https://YOUR-PROJECT.firebaseio.com/.json
# If you get data back, it's open

# Firestore (REST API)
curl "https://firestore.googleapis.com/v1/projects/YOUR-PROJECT/databases/(default)/documents/COLLECTION"
# If you get documents without auth, it's open
```
Step 2: Replace test rules with real security
Realtime Database — lock it down:
```json
{
  "rules": {
    "users": {
      "$uid": {
        ".read": "$uid === auth.uid",
        ".write": "$uid === auth.uid"
      }
    },
    "public_content": {
      ".read": true,
      ".write": "auth != null && root.child('admins').child(auth.uid).exists()"
    },
    "$other": {
      ".read": false,
      ".write": false
    }
  }
}
```
Firestore — lock it down:
```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // Users can only access their own data
    match /users/{userId} {
      allow read, write: if request.auth != null
        && request.auth.uid == userId;
    }

    // Public content: anyone can read, only admins write
    match /public/{document} {
      allow read: if true;
      allow write: if request.auth != null
        && get(/databases/$(database)/documents/admins/$(request.auth.uid)).data.role == "admin";
    }

    // Deny everything else by default
    match /{document=**} {
      allow read, write: if false;
    }
  }
}
```
Step 3: Test your rules before deploying
Firebase provides an emulator suite specifically for this:
```javascript
// Uses the v1 @firebase/rules-unit-testing API (v2 replaces
// initializeTestApp with initializeTestEnvironment).
// Run against the Firestore emulator with a test runner such as mocha or jest.
const { initializeTestApp, assertFails, assertSucceeds } = require('@firebase/rules-unit-testing');

describe('Security rules', () => {
  it('denies unauthenticated access to user data', async () => {
    const db = initializeTestApp({ projectId: 'test' }).firestore();
    const userDoc = db.collection('users').doc('user123');
    await assertFails(userDoc.get());
  });

  it('allows users to read their own data', async () => {
    const db = initializeTestApp({
      projectId: 'test',
      auth: { uid: 'user123' }
    }).firestore();
    const userDoc = db.collection('users').doc('user123');
    await assertSucceeds(userDoc.get());
  });

  it("denies users from reading other users' data", async () => {
    const db = initializeTestApp({
      projectId: 'test',
      auth: { uid: 'user123' }
    }).firestore();
    const otherDoc = db.collection('users').doc('user456');
    await assertFails(otherDoc.get());
  });
});
```
Step 4: Add rules to your CI/CD pipeline
Don't let insecure rules reach production:
```yaml
# In your CI config
- name: Test Firebase rules
  run: |
    firebase emulators:exec --only firestore "npm test"
```
Step 5: Monitor for unauthorized access
Enable Firebase audit logging. Set up alerts for unusual read patterns. If your database suddenly has traffic from IPs you don't recognize, you want to know immediately — not six months later when a researcher emails you.
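One possible starting point, sketched with Google Cloud's logging CLI (the metric name and filter are illustrative, and Data Access audit logs must be enabled for Firestore first):

```bash
# Create a log-based metric counting Firestore data-access events;
# attach a Cloud Monitoring alert to it for read-volume spikes.
gcloud logging metrics create firestore_reads \
  --description="Firestore data-access events" \
  --log-filter='protoPayload.serviceName="firestore.googleapis.com"'
```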
The uncomfortable math
Let's zoom out. The researchers scanned 5 million domains. They found 916 with exposed Firebase databases. That's a hit rate of about 0.018%.
Sounds small? Firebase powers millions of apps. If even a fraction of a percent are misconfigured, the total number of exposed databases worldwide is measured in thousands. And these researchers only checked for the most basic misconfiguration — completely open rules. They didn't test for more nuanced problems like overly permissive rules, missing field-level security, or rules that are correct for reads but wrong for writes.
The 916 sites they found are the tip of the iceberg. The real number is almost certainly larger.
And then there's the human cost. 125 million records isn't an abstract number. Those are real people whose names, emails, passwords, bank accounts, and personal data were sitting on the open internet. The 19 million users with plaintext passwords are especially vulnerable — because people reuse passwords, a single exposed credential can cascade across every other account that shares it.
What builders should take away
This isn't about blaming anyone. Firebase is a powerful platform. The developers who shipped these apps were building real products that solve real problems. The gap is in the tooling — the space between "test mode works" and "production mode is secure."
Here's what matters:
1. Treat security rules like production code. They're not config. They're not settings. They're the access control layer for your entire database. Review them, test them, version them.
2. Never deploy test mode rules. Add a check to your deployment process. If your rules contain .read: true or allow read, write: if true at the root level, the deploy should fail.
3. Use the principle of least privilege. Start with allow read, write: if false and open access only for specific paths, specific users, and specific operations. If a rule doesn't have a clear reason for being permissive, it shouldn't be.
4. Understand what "public" means in BaaS. Your Firebase API key is in your page source. Your Supabase anon key is in your page source. That's fine — it's designed that way. But it means your security rules are the only thing between your data and the world. If they're misconfigured, there's no other layer to save you.
5. Test from outside. Open an incognito window. Try to access your own database without logging in. Try with a different user account. If you can see data that should be private, fix it before someone else finds out.
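One concrete way to make rules reviewable, assuming you use the Firebase CLI and keep rules files in the repository:

```bash
# Keep rules files in version control and deploy them explicitly,
# so every rule change goes through code review like any other change
firebase deploy --only firestore:rules
firebase deploy --only database    # Realtime Database rules
```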
How Flowpatrol catches this
This class of vulnerability is exactly what Flowpatrol is built to detect. Our scanner checks for exposed BaaS credentials in client-side code, tests whether those credentials grant unauthorized access to data, and reports exactly what's exposed — before someone else finds it.
The 916 sites in this study were found by three researchers doing manual work. A five-minute scan catches the same pattern automatically.
This case study is based on public reporting by SecurityWeek, The Register, GitGuardian, and XEye Security.