Case Study

The Moltbook Breach: 1.5 Million API Tokens Exposed Because RLS Was Off

How an AI agent social network built with vibe coding left its entire Supabase database wide open. A deep technical breakdown of the breach, what went wrong, and what every builder can learn from it.

Flowpatrol Team · Mar 29, 2026 · 8 min read

The app that launched a security wake-up call

In January 2026, Moltbook went viral. Billed as "the front page of the agent internet," it was a social network designed exclusively for AI agents — a place where bots could post, message each other, and build karma. The concept was wild, the growth was real, and within days it had 1.5 million registered AI agents controlled by roughly 17,000 human users.

Then Wiz researchers took a look at the source code.

What they found was one of the most significant security incidents in the short history of vibe coding: the entire production database was accessible to anyone with a web browser. No authentication. No access controls. Full read and write access to every table.

This is the story of what happened, why it happened, and what it means for anyone shipping apps built with AI.


What was exposed

The numbers are stark:

| Data | Count |
| --- | --- |
| API authentication tokens | 1,500,000 |
| Email addresses | 35,000 |
| Private messages between agents | All of them |
| OpenAI/Anthropic API keys (shared in DMs) | Unknown |

An attacker with this access could impersonate any agent, read any private message, harvest API keys that users had shared in conversations, modify or delete any record in the database, or simply dump everything with a single query.

The fix? Two lines of SQL. That's what stood between a functional app and a secure one.


How the breach worked

Moltbook was built on Supabase — a popular backend-as-a-service platform that provides a PostgreSQL database, authentication, and real-time subscriptions. When configured properly, Supabase is secure. The key word is "configured."

Here's exactly what went wrong:

Step 1: The Supabase credentials were in the JavaScript bundle.

Every Supabase project has two values: a project URL and an "anon key." These are designed to be used in client-side code — Supabase's docs say the anon key is safe to expose publicly. But that safety depends entirely on Row Level Security being enabled.

// This was visible in the page source
const supabaseUrl = "https://xxx.supabase.co";
const supabaseKey = "eyJhbGciOiJIUzI1...";

Step 2: Row Level Security was disabled on every table.

RLS is Supabase's access control mechanism. When enabled, it ensures that database queries are filtered by the authenticated user — you can only see and modify your own data. When disabled, the anon key becomes a master key to everything.

-- This is all it would have taken
ALTER TABLE agents ENABLE ROW LEVEL SECURITY;

CREATE POLICY "Users can only view own agents"
ON agents FOR SELECT
USING (auth.uid() = owner_id);

These two statements were missing. On every table. (For a full walkthrough on how RLS works and how to configure it correctly, see our Supabase RLS guide.)

[Figure: comparison of a database table with RLS disabled versus enabled — all rows visible on the left, only the user's own row visible on the right]

Step 3: Anyone could query the database directly.

With the URL and anon key from the page source, an attacker could use the Supabase client library (or plain REST calls) to query any table:

import { createClient } from "@supabase/supabase-js";

const supabase = createClient(url, anonKey);

// This returns ALL agents — all 1.5 million
const { data } = await supabase.from("agents").select("*");

// This returns ALL private messages
const { data: messages } = await supabase
  .from("messages")
  .select("*");

No login needed. No tokens to guess. Just copy two strings from the page source.
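The harvesting step really is that mechanical. As a purely hypothetical sketch (the project URL and key below are invented), two regular expressions are enough to pull both values out of a page's source:

```javascript
// Hypothetical illustration: extracting Supabase credentials from page
// source with two regexes. The URL and key here are made up.
const pageSource = `
const supabaseUrl = "https://abcd1234.supabase.co";
const supabaseKey = "eyJhbGciOiJIUzI1NiJ9.fake.payload";
`;

// Supabase project URLs follow a fixed <project-ref>.supabase.co pattern.
const urlMatch = pageSource.match(/https:\/\/[a-z0-9]+\.supabase\.co/);
// Anon keys are JWTs, which always begin with "eyJ" (base64 of '{"').
const keyMatch = pageSource.match(/eyJ[A-Za-z0-9_.-]+/);

console.log(urlMatch[0]); // the project URL
console.log(keyMatch[0]); // the anon key
```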


The vibe coding connection

Moltbook's founder publicly stated: "I didn't write a single line of code for @moltbook. I just had a vision for the technical architecture, and AI made it a reality."

This is the purest expression of vibe coding: describe what you want, let AI generate the code, ship it. The approach delivered a functional product in record time. But "functional" and "secure" are different things.

Here's the gap: AI coding assistants generate code that works. They create tables, write queries, build UIs, handle routing. What they don't automatically do is configure security policies. They don't enable RLS unless you ask. They don't write access control policies unless you describe them. They don't run threat models or consider what happens when someone copies your anon key from the page source.

This isn't a theoretical risk. It's a pattern:

  • Moltbook (January 2026): Zero RLS on any table. 1.5M records exposed.
  • Lovable Platform (CVE-2025-48757): AI generated 170+ apps without RLS. 303 vulnerable endpoints.
  • Firebase Mass Misconfiguration (2024-2025): 900+ sites with test-mode security rules. 125M records exposed.

The common thread: the AI built the feature, but nobody (human or AI) configured the security.


The timeline

The response was fast — once the vulnerability was discovered:

| Time | Event |
| --- | --- |
| Jan 31, 21:48 UTC | Wiz researchers discover the vulnerability, contact Moltbook |
| Jan 31, 23:29 UTC | First fix: agents, owners, site_admins tables secured |
| Feb 1, 00:44 UTC | Write access blocked on remaining tables |
| Feb 2 | 404 Media publishes their investigation |
| Feb 2 | Andrej Karpathy and Gary Marcus issue public warnings |

Credit where it's due: once notified, Moltbook responded within two hours. The problem wasn't response time — it was that the vulnerability existed at all.


What the experts said

Andrej Karpathy, OpenAI founding member, had initially described Moltbook as "the most incredible sci-fi takeoff-adjacent thing I've seen recently." After the breach, he warned: "You are putting your computer and private data at a high risk" and urged people not to run agent systems casually.

Gary Marcus called the platform "a disaster waiting to happen" and highlighted the systemic risks of AI-generated code without proper oversight.

The broader reaction across tech communities was a turning point: vibe coding security became a mainstream conversation overnight.

[Figure: three stages — Build, Ship, Breach — connected by arrows, with the final stage highlighted in red with a warning icon]


What builders should take away

If you're shipping an app built with AI — whether it's a side project, an MVP, or a product you're charging for — here are the concrete lessons:

1. Always enable RLS when using Supabase

This is non-negotiable. If your app uses Supabase with client-side access, RLS must be enabled on every table.

-- Run this for EVERY table in your project
ALTER TABLE your_table ENABLE ROW LEVEL SECURITY;

-- Then create policies that match your access model
CREATE POLICY "Users access own data"
ON your_table FOR ALL
USING (auth.uid() = user_id);

You can check your current RLS status in seconds:

SELECT schemaname, tablename, rowsecurity
FROM pg_tables
WHERE schemaname = 'public';

If any row shows rowsecurity = false, fix it now.
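If you'd rather check programmatically, a small helper can flag the offenders from that query's result. A minimal sketch, assuming rows shaped like the query's three columns:

```javascript
// Sketch: given rows from the pg_tables query above, list tables that
// still have RLS disabled. Row shape mirrors the query's columns.
function tablesWithoutRls(rows) {
  return rows
    .filter((row) => row.rowsecurity === false)
    .map((row) => row.tablename);
}

// Example rows (hypothetical project)
const rows = [
  { schemaname: "public", tablename: "agents", rowsecurity: false },
  { schemaname: "public", tablename: "messages", rowsecurity: true },
];

console.log(tablesWithoutRls(rows)); // [ 'agents' ]
```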

2. Review what your AI generated

AI-generated code is a first draft, not a final product. Before deploying, review the security-critical parts:

  • Database access patterns (is there RLS? Are there policies?)
  • Authentication flows (is auth actually enforced?)
  • API endpoints (do they check permissions?)
  • Client-side code (are there secrets that shouldn't be there?)

3. Test from the attacker's perspective

Open your browser, view your page source, and look for API keys. Then try to use those keys to access data you shouldn't be able to see. If you can, an attacker can too.
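One way to script that check: probe your project's auto-generated REST endpoint with only the anon key, and treat any returned rows as a failure. A hedged sketch — `probeTable` uses Supabase's standard `/rest/v1/<table>` REST convention, and `isExposed` encodes the expectation that RLS silently filters rows rather than erroring:

```javascript
// Sketch of an attacker's-eye probe. projectUrl and anonKey stand in for
// the two strings visible in your page source.
async function probeTable(projectUrl, anonKey, table) {
  const res = await fetch(`${projectUrl}/rest/v1/${table}?select=*`, {
    headers: { apikey: anonKey, Authorization: `Bearer ${anonKey}` },
  });
  return res.json();
}

// With RLS enabled and no user session, Supabase returns an empty array,
// not an error — so any rows at all mean the table is exposed.
function isExposed(rows) {
  return Array.isArray(rows) && rows.length > 0;
}

console.log(isExposed([]));          // false — protected
console.log(isExposed([{ id: 1 }])); // true — exposed
```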

4. Don't rely on a single security layer

Supabase RLS is your last line of defense, not your only one. A properly secured app has multiple layers:

  • Server-side API routes that validate requests
  • Supabase Auth for user authentication
  • RLS policies for row-level data access
  • Security headers (CSP, HSTS) for transport security
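What the first of those layers looks like in practice: a route handler that refuses to run a query without a session, and scopes the query to the caller even when RLS exists as a backstop. A minimal framework-agnostic sketch — `getSession` and `db` are hypothetical stand-ins for whatever auth and database clients you use:

```javascript
// Server-side layer sketch: no session, no query. `getSession` and `db`
// are stand-ins injected as dependencies.
async function handleListAgents(req, { getSession, db }) {
  const session = await getSession(req);
  if (!session) {
    return { status: 401, body: { error: "Not authenticated" } };
  }
  // Scope the query to the caller — RLS is the backstop, not the only filter.
  const agents = await db.agentsOwnedBy(session.userId);
  return { status: 200, body: { agents } };
}

// Usage with stubbed dependencies (hypothetical):
const deps = {
  getSession: async (req) => (req.token === "valid" ? { userId: "u1" } : null),
  db: { agentsOwnedBy: async (id) => [{ id: "a1", owner: id }] },
};

(async () => {
  console.log((await handleListAgents({ token: "nope" }, deps)).status);  // 401
  console.log((await handleListAgents({ token: "valid" }, deps)).status); // 200
})();
```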

How we would have caught this

This is exactly the kind of vulnerability Flowpatrol is built to detect. Our scanner:

  1. Analyzes client-side JavaScript for exposed Supabase and Firebase credentials
  2. Tests RLS enforcement by attempting unauthenticated data access
  3. Checks for cross-user data access to verify row-level isolation
  4. Validates authentication flows to ensure endpoints actually require login

The Moltbook breach was preventable with a five-minute scan. That's the point — security doesn't have to be slow or expensive. It just has to happen before someone else finds the problem first.


The Moltbook breach is documented in detail in public reporting by Wiz, 404 Media, and Infosecurity Magazine.
