One flaw. 170+ apps. Same bug in every single one.
In April 2025, a security researcher opened Lovable's own featured-apps showcase — the page where Lovable shows off the best things built on its platform — and started poking around. Forty-seven minutes later, he had pulled real user data from multiple featured apps. No exploits. No credentials. Two strings from each app's page source and a few lines of JavaScript.
The apps weren't individually misconfigured. They were all generated by the same AI, using the same template, with the same missing line: ALTER TABLE users ENABLE ROW LEVEL SECURITY. Every app Lovable shipped had Row Level Security switched off at the database layer. The Supabase anon key — which is supposed to be safe to expose publicly, when RLS is on — was sitting in the page source of every one of them.
Researchers documented 303 vulnerable endpoints across 170+ confirmed apps. Personal debt amounts. Home addresses. Third-party API keys. Email/password combos in plain text. This wasn't a bug. It was a production line.
Here's what happened, why it keeps happening, and how to check if your own app is on the list — in 60 seconds.
Check your own app in 60 seconds
Before we get into the story, do this. If you built anything on Lovable, Bolt, Cursor, or v0 that uses Supabase, open your app in a browser and run this in the devtools console:
// Paste into the devtools console on your deployed app.
// The credentials usually live in the bundled JS, not the HTML itself,
// so fetch each same-origin script and scan everything.
const srcs = await Promise.all(
  [...document.scripts].filter(s => s.src).map(s => fetch(s.src).then(r => r.text()).catch(() => ""))
);
const haystack = document.documentElement.innerHTML + srcs.join("");
const url = haystack.match(/https:\/\/[\w-]+\.supabase\.co/)?.[0];
const key = haystack.match(/eyJ[\w-]+\.[\w-]+\.[\w-]+/)?.[0];
console.log({ url, key });
Got a URL and a key back? Now try to read a table you shouldn't be able to read:
curl "YOUR_SUPABASE_URL/rest/v1/users?select=*&limit=5" \
-H "apikey: YOUR_ANON_KEY" \
-H "Authorization: Bearer YOUR_ANON_KEY"
If that returns real user data, you have the exact bug this article is about. Keep reading — the fix is five lines of SQL and we show you every one of them.
What Lovable does (and why the stakes are high)
Lovable launched in 2024 as a generative AI platform that creates full-stack web applications from text prompts. You describe what you want — "build me a debt tracking app" — and Lovable generates a working React frontend backed by Supabase for database, auth, and real-time features. No code required.
The promise is real. Thousands of builders have shipped products with Lovable that would have taken weeks with traditional development. By March 2025, Lovable was one of the fastest-growing platforms in the vibe coding space — alongside Bolt, Base44, and Replit. Real users. Real data. Real stakes.
There's a gap between "working" and "secure." Lovable's AI fell straight through it.
How the vulnerability worked
The issue is straightforward — and that's precisely what makes it so serious.
Every Lovable app uses Supabase as its backend. Supabase gives you a PostgreSQL database with a built-in security mechanism called Row Level Security (RLS). When RLS is on and configured with policies, the database itself enforces who can read and write which rows. Elegant and effective — when it's switched on.
Lovable's AI never switched it on.
Step 1: The credentials were in the page source.
Every Supabase project requires a URL and an "anon key" in the client-side JavaScript. This is by design — Supabase's documentation says the anon key is safe to expose publicly. But that safety assumption depends entirely on RLS being enabled.
// Visible in any Lovable app's JavaScript bundle
const supabaseUrl = "https://yourproject.supabase.co";
const supabaseKey = "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...";
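The "safe to expose" claim is easy to see for yourself: the anon key is just a JWT whose payload names the `anon` role. The key grants nothing by itself — RLS is the mechanism that's supposed to decide what that role can touch. A quick sketch (the token below is fabricated for illustration, not a real project's key; `Buffer` is Node — in a browser console you'd use `atob` instead):

```javascript
// Decode a JWT payload without verifying the signature — enough to
// inspect what role a Supabase key claims.
function decodeJwtPayload(jwt) {
  const b64 = jwt.split(".")[1].replace(/-/g, "+").replace(/_/g, "/");
  return JSON.parse(Buffer.from(b64, "base64").toString("utf8"));
}

// Fabricated token with the same shape as a Supabase anon key
const fakePayload = Buffer.from(
  JSON.stringify({ iss: "supabase", role: "anon" })
).toString("base64url");
const fakeKey = `eyJhbGciOiJIUzI1NiJ9.${fakePayload}.signature`;

console.log(decodeJwtPayload(fakeKey).role); // → "anon"
```

With RLS enabled, the database checks every query from that role against your policies. With RLS disabled, the role check is the only check — and it passes.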
Step 2: RLS was disabled on every table.
Lovable's AI generated database schemas with tables for users, financial data, transactions, messages — whatever the app needed. It created the tables. It wrote the queries. It built the UI. It did not enable Row Level Security. It did not create any security policies.
-- What Lovable generated:
CREATE TABLE users (
id UUID PRIMARY KEY,
email TEXT,
name TEXT,
created_at TIMESTAMP
);
CREATE TABLE debts (
id UUID PRIMARY KEY,
user_id UUID REFERENCES users(id),
amount DECIMAL,
description TEXT
);
-- What Lovable did NOT generate:
-- ALTER TABLE users ENABLE ROW LEVEL SECURITY;
-- ALTER TABLE debts ENABLE ROW LEVEL SECURITY;
-- CREATE POLICY ... (any policy at all)
Step 3: Anyone could query everything.
Without RLS, the Supabase anon key becomes an all-access pass. An attacker (or a curious researcher) could extract the credentials from the page source and query any table directly:
import { createClient } from "@supabase/supabase-js";
const supabase = createClient(url, anonKey);
// Returns ALL users — every email, every name
const { data: users } = await supabase.from("users").select("*");
// Returns ALL financial records
const { data: debts } = await supabase.from("debts").select("*");
// Full write access too — could modify or delete anything.
// (The filter just needs to match every row; comparing a UUID column
// against the zero UUID does that without a type error.)
await supabase.from("users").delete().neq("id", "00000000-0000-0000-0000-000000000000");
No authentication. No token to steal. No clever exploit. Just two strings from the page source and a few lines of JavaScript.
Normally the login screen is where access control happens. In a Lovable app, the login screen was theater. The real access check was supposed to happen inside Postgres — and nothing was listening.
The scale of the problem
A single misconfigured app is a bug. This was a production line.
| Metric | Value |
|---|---|
| Vulnerable endpoints discovered | 303 across tested apps |
| Confirmed affected applications | 170+ |
| Time to exploit a single app | 47 minutes (documented audit) |
| Data types exposed | Debt amounts, home addresses, API keys, user credentials |
| Authentication required | None |
| CVSS severity | Critical (9.1+) |
One of those apps alone exposed 13,000 user records — personal debt amounts, payment histories, home addresses. These weren't obscure side projects either. They were the apps Lovable was actively promoting in its public showcase.
The exposed data catalog included debt amounts and payment histories, physical home addresses, third-party API keys stored in database tables, private user conversations, and email/password combinations — plaintext, readable with the anon key sitting in every page's source.
Every builder who shipped one of these apps trusted Lovable to handle the backend correctly. The platform generated working code. It did not generate secure code.
The VibeScamming finding
RLS wasn't the only issue surfaced that spring. In April 2025, Guardio Labs published separate research testing how well AI coding platforms resist being prompted to build malicious apps. They called the pattern "VibeScamming." A score of 10 means maximum resistance; 1 means almost none.
| Platform | VibeScamming Score (Guardio Labs, April 2025) |
|---|---|
| ChatGPT | 8.0 / 10 |
| Claude | 4.3 / 10 |
| Lovable | 1.8 / 10 |
Guardio's write-up: "From pixel-perfect scam pages to live hosting, evasion techniques, and even admin dashboards to track stolen data — Lovable didn't just participate, it performed. No guardrails, no hesitation."
Credential harvesting pages. Phishing UIs. Exfiltration dashboards, hosted end-to-end. The RLS failure and the VibeScamming score are separate findings — but together they tell the same story. Lovable was optimized to ship fast. Security was never part of the brief.
The timeline
| Date | Event |
|---|---|
| March 20, 2025 | Security researcher Matt Palmer publicly confirms RLS misconfiguration in Lovable apps |
| March 21, 2025 | Vulnerability reported to Lovable |
| April 14, 2025 | Independent researcher exploits multiple Lovable showcase apps in 47 minutes |
| April 24, 2025 | Lovable releases "Lovable 2.0" with a security scan feature |
| April 2025 | Guardio Labs publishes VibeScamming research |
| May 29, 2025 | CVE-2025-48757 formally disclosed |
| May 29, 2025 | Superblocks publishes detailed technical analysis |
Two months passed between initial reporting and the CVE disclosure. In that window, 170+ apps with 303 exposed endpoints remained accessible to anyone with a browser and a few minutes. When Lovable did respond, it shipped a "security scan" feature. Here's what it actually checked.
Lovable's response: a scanner that didn't scan enough
Lovable 2.0 introduced a "security scan" feature. On the surface, that sounds responsive. In practice, it was incomplete.
The scanner checked whether RLS was enabled on tables. That's it. It did not validate whether the RLS policies were actually correct. It did not detect misconfigured policies that appeared to be present but didn't enforce meaningful restrictions. It did not check for the dozens of other security patterns that matter in production.
In other words: you could have RLS "enabled" with a policy that allows everything, and Lovable's scanner would give you a green checkmark.
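Concretely — a hypothetical policy like this would turn the scanner green while enforcing nothing:

```sql
-- RLS is technically "enabled"...
ALTER TABLE users ENABLE ROW LEVEL SECURITY;

-- ...but this policy lets anyone read and write every row.
-- An "is RLS on?" check still passes.
CREATE POLICY "allow everything" ON users
  FOR ALL
  USING (true)
  WITH CHECK (true);
```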
Beyond the scanner, here's what was missing from Lovable's response:
- No forced remediation for existing vulnerable apps already in production
- No proactive user notification telling builders their data may have been exposed
- No automatic policy generation to retrofit security onto existing apps
- No mandatory security review before deployment
Lovable's public statement on X: "We're not yet where we want to be in terms of security and we're committed to keep improving the security posture for all Lovable users."
That's an acknowledgment, not a fix. The 170+ vulnerable apps that were already deployed? Their builders were left to figure it out themselves.
Why this matters beyond Lovable
It's easy to frame this as "one platform made a mistake." But the underlying dynamic applies to every AI coding tool in the market.
AI code generators optimize for functionality, not security. When you prompt an AI to "build a debt tracking app," it builds something that tracks debt. It creates the tables, writes the queries, renders the UI. It doesn't threat-model. It doesn't think about what happens when someone copies your Supabase credentials from the page source. Security isn't what you asked for, so security isn't what you get.
The failure is systematic, not random. When one developer forgets RLS, one app is vulnerable. When an AI platform's code generation template doesn't include RLS, every app it generates is vulnerable. The same flaw, reproduced identically across hundreds of projects. Attack techniques that work on one app work on all of them.
Users can't fix what they don't know about. The whole point of vibe coding is that you don't need to understand the technical details. A builder who prompts Lovable to "build a debt tracker" may not know what Row Level Security is. They trusted the platform to handle the backend correctly — the same way they trust it to set up routing, handle state management, and configure the build pipeline.
This is a new category of risk. Not a bug in one codebase, but a vulnerability in the generation process — replicated at scale, affecting users who may not have the technical background to identify or remediate it.
What builders should do right now
If you've built anything with Lovable (or any AI coding platform that uses Supabase), here are the concrete steps to check and fix your security.
1. Check your RLS status
(Need a full primer on what RLS is and how to configure it? Read our Supabase RLS guide.)
Open the Supabase SQL Editor for your project and run this query:
SELECT schemaname, tablename, rowsecurity
FROM pg_tables
WHERE schemaname = 'public';
If any row shows rowsecurity = false, that table is exposed. Every table that stores user data, financial records, messages, or any sensitive information needs RLS enabled.
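It's also worth checking which tables have policies at all. RLS enabled with zero policies means nobody but the service role gets in — safe, but possibly not what your app expects:

```sql
-- List every policy in the public schema
SELECT schemaname, tablename, policyname, cmd, qual
FROM pg_policies
WHERE schemaname = 'public';
```

A table that shows `rowsecurity = true` in the first query but has no rows here is locked down completely.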
2. Enable RLS on every table
-- Enable RLS on each table
ALTER TABLE users ENABLE ROW LEVEL SECURITY;
ALTER TABLE debts ENABLE ROW LEVEL SECURITY;
ALTER TABLE transactions ENABLE ROW LEVEL SECURITY;
ALTER TABLE messages ENABLE ROW LEVEL SECURITY;
-- Repeat for every table in your public schema
Enabling RLS without any policies will lock the table down completely (only the service role key can access it). That's a safe starting point — you can add policies from there.
3. Create policies that match your access model
The most common pattern is "users can only access their own data":
-- Users can read their own profile
CREATE POLICY "Users read own profile"
ON users FOR SELECT
USING (auth.uid() = id);
-- Users can update their own profile
CREATE POLICY "Users update own profile"
ON users FOR UPDATE
USING (auth.uid() = id);
-- Users can read their own debts
CREATE POLICY "Users read own debts"
ON debts FOR SELECT
USING (auth.uid() = user_id);
-- Users can insert their own debts
CREATE POLICY "Users insert own debts"
ON debts FOR INSERT
WITH CHECK (auth.uid() = user_id);
Be specific. Don't use FOR ALL unless you genuinely want the same rule for SELECT, INSERT, UPDATE, and DELETE. And avoid overly permissive policies — a policy like USING (true) defeats the entire purpose.
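Assuming the same `debts` table as above, the remaining operation gets its own explicit policy too:

```sql
-- Users can delete their own debts
CREATE POLICY "Users delete own debts"
  ON debts FOR DELETE
  USING (auth.uid() = user_id);
```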
4. Test from the outside
After configuring RLS, verify it works. Try to access data without authentication:
# This should return an empty array or an error, NOT your data
curl "https://yourproject.supabase.co/rest/v1/users?select=*" \
-H "apikey: your-anon-key" \
-H "Authorization: Bearer your-anon-key"
If you get data back, your policies aren't working correctly. Go back and check.
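The flip side is worth testing too: a logged-in user should see only their own rows, not everyone's. Assuming you can grab a session access token for a test account (for example from `supabase.auth.getSession()` in your app's console), a sketch:

```shell
# Replace USER_JWT with a real session access token for a test account.
# This should return only that user's rows — not the whole table.
curl "https://yourproject.supabase.co/rest/v1/debts?select=*" \
  -H "apikey: your-anon-key" \
  -H "Authorization: Bearer USER_JWT"
```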
5. Check the rest of the surface
RLS is the biggest one, but while you're in there:
- Secrets in the database. API keys and tokens belong in environment variables, not in a `config` table readable by the anon key.
- Server-side routes. Anything under `/api/*` that reads data should verify the user before it queries.
- Admin/debug endpoints. Remove them or lock them behind real auth.
- Storage buckets. They have their own policies. Default-public buckets leak files the same way default-public tables leak rows.
The bigger picture: a new category of software risk
CVE-2025-48757 is not a zero-day in an operating system. It's not a supply chain attack through a compromised package. It's a vulnerability baked into a code generation template, reproduced identically across every app the platform produced.
That maps precisely to three of OWASP's most critical categories:
- A01:2021 - Broken Access Control: No row-level security means any user reaches any row.
- A05:2021 - Security Misconfiguration: Insecure defaults — no RLS enabled, no policies created.
- A04:2021 - Insecure Design: Security was never part of the generation process at all.
None of these are exotic. They're foundational. They were simply absent.
The builders who shipped on Lovable aren't at fault. They used a tool that promised to handle the backend. It did — for everything except security. The uncomfortable part is structural: when an AI platform has a blind spot in its template, every app it generates inherits that blind spot. Same schema. Same missing policies. Same two-line exploit, working identically across 170 apps.
When the template is broken, you don't get one vulnerability. You get a vulnerability class.
How Flowpatrol catches this
This is exactly what Flowpatrol was built for. Paste a URL, wait five minutes, get a report. For RLS-class bugs, we:
- Pull Supabase and Firebase credentials out of your client bundle
- Hit every discovered table with the anon key and check what comes back
- Try cross-user reads to confirm row isolation actually works
- Flag any endpoint that returns data it shouldn't
A five-minute scan would have caught every one of the 303 endpoints in the Lovable dataset. The fix is five lines of SQL. The only thing missing was knowing.
You shipped something real. Now make sure it's solid. Paste your URL, see what comes back — and if you want the longer hardening walkthrough, read How to Secure Your Lovable App.
This case study draws from public reporting by Superblocks, Guardio Labs, Semafor, and GBHackers. CVE-2025-48757 was formally disclosed on May 29, 2025.