The Lovable RLS Vulnerability: How One AI Platform Shipped the Same Security Flaw Across 170+ Apps
CVE-2025-48757 exposed a systematic Row Level Security failure in Lovable, one of the most popular vibe coding platforms. 170+ apps. 303 vulnerable endpoints. A 1.8/10 abuse-resistance score. Here's what happened, why it matters, and what every builder should do about it.
This wasn't one bad app. It was hundreds.
When a single application ships with a security flaw, that's a bug. When an AI platform generates the same security flaw across 170+ applications, that's a systemic failure. And that's exactly what happened with Lovable.
CVE-2025-48757 revealed that Lovable — one of the most popular AI-powered coding platforms for turning prompts into full-stack apps — was systematically generating applications without Row Level Security enabled on their Supabase databases. Not some of the time. Not in edge cases. As a default pattern baked into the AI's code generation.
The result: 170+ confirmed vulnerable applications, 303 exploitable endpoints, personal debt records, home addresses, API keys, and user credentials — all accessible to anyone who knew where to look. And "where to look" was the page source.
This is the story of what went wrong, why it matters far beyond Lovable, and what you can do right now if you've built anything on the platform.
What Lovable does (and why people love it)
Lovable launched in 2024 as a generative AI platform that creates full-stack web applications from text prompts. You describe what you want — "build me a debt tracking app" — and Lovable generates a working React frontend backed by Supabase for database, auth, and real-time features. No code required.
It's part of a wave of "vibe coding" platforms (alongside Bolt, Base44, and Replit) that let anyone turn an idea into a deployed app in hours. The promise is real, and the platforms are genuinely impressive. Thousands of builders have used Lovable to ship products that would have taken weeks or months with traditional development.
But there's a gap between "working" and "secure." And Lovable's AI fell straight through it.
How the vulnerability worked
The issue is straightforward, and that's part of what makes it so serious.
Every Lovable app uses Supabase as its backend. Supabase gives you a PostgreSQL database with a built-in security mechanism called Row Level Security (RLS). When RLS is enabled and configured with policies, the database itself enforces who can see and modify which rows. It's elegant and effective — when it's turned on.
Lovable's AI never turned it on.
Step 1: The credentials were in the page source.
Every Supabase project requires a URL and an "anon key" in the client-side JavaScript. This is by design — Supabase's documentation says the anon key is safe to expose publicly. But that safety assumption depends entirely on RLS being enabled.
```javascript
// Visible in any Lovable app's JavaScript bundle
const supabaseUrl = "https://yourproject.supabase.co";
const supabaseKey = "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...";
```
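To see how little effort this step takes, here is a sketch of the extraction an attacker would script. The bundle text below is a stand-in, not pulled from any real app:

```javascript
// Pull Supabase credentials out of bundled JavaScript.
// Project URLs follow https://<ref>.supabase.co, and anon keys are
// JWTs: three base64url segments starting with "eyJ".
function extractSupabaseCreds(bundle) {
  const url = bundle.match(/https:\/\/[a-z0-9-]+\.supabase\.co/);
  const key = bundle.match(/eyJ[\w-]+\.[\w-]+\.[\w-]+/);
  return { url: url && url[0], key: key && key[0] };
}

// Hypothetical bundle snippet for illustration
const sampleBundle =
  'const supabaseUrl="https://yourproject.supabase.co";' +
  'const supabaseKey="eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.e30.sig";';

console.log(extractSupabaseCreds(sampleBundle));
```

Two regexes over a public file is the entire reconnaissance phase.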
Step 2: RLS was disabled on every table.
Lovable's AI generated database schemas with tables for users, financial data, transactions, messages — whatever the app needed. It created the tables. It wrote the queries. It built the UI. It did not enable Row Level Security. It did not create any security policies.
```sql
-- What Lovable generated:
CREATE TABLE users (
  id UUID PRIMARY KEY,
  email TEXT,
  name TEXT,
  created_at TIMESTAMP
);

CREATE TABLE debts (
  id UUID PRIMARY KEY,
  user_id UUID REFERENCES users(id),
  amount DECIMAL,
  description TEXT
);

-- What Lovable did NOT generate:
-- ALTER TABLE users ENABLE ROW LEVEL SECURITY;
-- ALTER TABLE debts ENABLE ROW LEVEL SECURITY;
-- CREATE POLICY ... (any policy at all)
```
Step 3: Anyone could query everything.
Without RLS, the Supabase anon key becomes an all-access pass. An attacker (or a curious researcher) could extract the credentials from the page source and query any table directly:
```javascript
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(url, anonKey);

// Returns ALL users — every email, every name
const { data: users } = await supabase.from("users").select("*");

// Returns ALL financial records
const { data: debts } = await supabase.from("debts").select("*");

// Full write access too — could modify or delete anything.
// (supabase-js refuses an unfiltered delete, so attackers add a
// filter that matches every real row, like "not the nil UUID")
await supabase
  .from("users")
  .delete()
  .neq("id", "00000000-0000-0000-0000-000000000000");
```
No authentication. No token to steal. No clever exploit. Just two strings from the page source and a few lines of JavaScript.
The scale of the problem
This is where the story gets concerning. A single misconfigured app is a bug. This was a production line.
| Metric | Value |
|---|---|
| Vulnerable endpoints discovered | 303 across tested apps |
| Confirmed affected applications | 170+ |
| Time to exploit a single app | 47 minutes (documented audit) |
| Data types exposed | Debt amounts, home addresses, API keys, user credentials |
| Authentication required | None |
| CVSS severity | Critical (9.1+) |
In one documented test, an independent researcher went through Lovable's public showcase of featured apps and hacked multiple applications in 47 minutes. These weren't obscure side projects — they were apps Lovable was promoting.
A single misconfigured application exposed 13,000 user records including sensitive personal information. Across all affected apps, the exposed data included personal debt amounts and payment histories, physical home addresses, third-party API keys stored in databases, private user inputs and conversations, and email/password combinations.
Every one of these apps was built by someone who trusted Lovable to generate production-quality code. The platform delivered functional code. It did not deliver secure code.
The VibeScamming problem
The RLS vulnerability wasn't the only security issue researchers found with Lovable. Guardio Labs, a browser security company, conducted a separate study testing how well AI coding platforms resist being used to create malicious applications — a technique they called "VibeScamming."
The results were striking:
| Platform | VibeScamming Score (higher = more resistant) | Assessment |
|---|---|---|
| ChatGPT | 8.0/10 | Strongest guardrails tested |
| Claude | 4.3/10 | Partial guardrails |
| Lovable | 1.8/10 | Minimal guardrails |
According to Guardio Labs: "From pixel-perfect scam pages to live hosting, evasion techniques, and even admin dashboards to track stolen data — Lovable didn't just participate, it performed. No guardrails, no hesitation."
When prompted appropriately, Lovable generated credential harvesting pages mimicking real services, phishing sites with convincing UI, data exfiltration dashboards, and fully hosted malicious applications.
The 1.8/10 score means Lovable offered essentially no resistance to generating harmful applications. Combined with the RLS vulnerability, the picture is clear: the platform was optimized for speed and functionality, with security as an afterthought.
The timeline
| Date | Event |
|---|---|
| March 20, 2025 | Security researcher Matt Palmer publicly confirms RLS misconfiguration in Lovable apps |
| March 21, 2025 | Vulnerability reported to Lovable |
| April 14, 2025 | Independent researcher exploits multiple Lovable showcase apps in 47 minutes |
| April 24, 2025 | Lovable releases "Lovable 2.0" with a security scan feature |
| April 2025 | Guardio Labs publishes VibeScamming research |
| May 29, 2025 | CVE-2025-48757 formally disclosed |
| May 29, 2025 | Superblocks publishes detailed technical analysis |
Two months passed between initial reporting and the CVE disclosure. In that time, Lovable shipped a security scanner. Let's talk about what that scanner actually did.
Lovable's response: a scanner that didn't scan enough
Lovable 2.0 introduced a "security scan" feature. On the surface, that sounds responsive. In practice, it was incomplete.
The scanner checked whether RLS was enabled on tables. That's it. It did not validate whether the RLS policies were actually correct. It did not detect misconfigured policies that appeared to be present but didn't enforce meaningful restrictions. It did not check for the dozens of other security patterns that matter in production.
In other words: you could have RLS "enabled" with a policy that allows everything, and Lovable's scanner would give you a green checkmark.
Beyond the scanner, here's what was missing from Lovable's response:
- No forced remediation for existing vulnerable apps already in production
- No proactive user notification telling builders their data may have been exposed
- No automatic policy generation to retrofit security onto existing apps
- No mandatory security review before deployment
Lovable's public statement on X: "We're not yet where we want to be in terms of security and we're committed to keep improving the security posture for all Lovable users."
That's an acknowledgment, not a fix. The 170+ vulnerable apps that were already deployed? Their builders were left to figure it out themselves.
Why this matters beyond Lovable
It's easy to frame this as "one platform made a mistake." But the underlying dynamic applies to every AI coding tool in the market.
AI code generators optimize for functionality, not security. When you prompt an AI to "build a debt tracking app," it builds something that tracks debt. It creates the tables, writes the queries, renders the UI. It doesn't threat-model. It doesn't think about what happens when someone copies your Supabase credentials from the page source. Security isn't what you asked for, so security isn't what you get.
The failure is systematic, not random. When one developer forgets RLS, one app is vulnerable. When an AI platform's code generation template doesn't include RLS, every app it generates is vulnerable. The same flaw, reproduced identically across hundreds of projects. Attack techniques that work on one app work on all of them.
Users can't fix what they don't know about. The whole point of vibe coding is that you don't need to understand the technical details. A builder who prompts Lovable to "build a debt tracker" may not know what Row Level Security is. They trusted the platform to handle the backend correctly — the same way they trust it to set up routing, handle state management, and configure the build pipeline.
This is a new category of risk. Not a bug in one codebase, but a vulnerability in the generation process — replicated at scale, affecting users who may not have the technical background to identify or remediate it.
What builders should do right now
If you've built anything with Lovable (or any AI coding platform that uses Supabase), here are the concrete steps to check and fix your security.
1. Check your RLS status
(Need a full primer on what RLS is and how to configure it? Read our Supabase RLS guide.)
Open the Supabase SQL Editor for your project and run this query:
```sql
SELECT schemaname, tablename, rowsecurity
FROM pg_tables
WHERE schemaname = 'public';
```
If any row shows rowsecurity = false, that table is exposed. Every table that stores user data, financial records, messages, or any sensitive information needs RLS enabled.
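If you manage several projects, this check is easy to script. A minimal sketch that takes the query's result rows (shape assumed: `tablename` and `rowsecurity` columns, as returned above) and lists the exposed tables:

```javascript
// Given rows from the pg_tables query, return tables with RLS disabled
function exposedTables(rows) {
  return rows
    .filter((r) => r.rowsecurity === false)
    .map((r) => r.tablename);
}

// Hypothetical query result for illustration
const rows = [
  { tablename: "users", rowsecurity: false },
  { tablename: "debts", rowsecurity: false },
  { tablename: "audit_log", rowsecurity: true },
];
console.log(exposedTables(rows)); // → [ 'users', 'debts' ]
```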
2. Enable RLS on every table
```sql
-- Enable RLS on each table
ALTER TABLE users ENABLE ROW LEVEL SECURITY;
ALTER TABLE debts ENABLE ROW LEVEL SECURITY;
ALTER TABLE transactions ENABLE ROW LEVEL SECURITY;
ALTER TABLE messages ENABLE ROW LEVEL SECURITY;
-- Repeat for every table in your public schema
```
Enabling RLS without any policies will lock the table down completely (only the service role key can access it). That's a safe starting point — you can add policies from there.
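If step 1 turned up many exposed tables, you can also enable RLS across all of them in one pass. A PL/pgSQL sketch for the Supabase SQL Editor (review the table list from step 1 before running this against production):

```sql
-- Enable RLS on every public table that still has it off
DO $$
DECLARE
  t record;
BEGIN
  FOR t IN
    SELECT tablename FROM pg_tables
    WHERE schemaname = 'public' AND NOT rowsecurity
  LOOP
    EXECUTE format('ALTER TABLE public.%I ENABLE ROW LEVEL SECURITY', t.tablename);
  END LOOP;
END $$;
```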
3. Create policies that match your access model
The most common pattern is "users can only access their own data":
```sql
-- Users can read their own profile
CREATE POLICY "Users read own profile"
  ON users FOR SELECT
  USING (auth.uid() = id);

-- Users can update their own profile
CREATE POLICY "Users update own profile"
  ON users FOR UPDATE
  USING (auth.uid() = id);

-- Users can read their own debts
CREATE POLICY "Users read own debts"
  ON debts FOR SELECT
  USING (auth.uid() = user_id);

-- Users can insert their own debts
CREATE POLICY "Users insert own debts"
  ON debts FOR INSERT
  WITH CHECK (auth.uid() = user_id);
```
Be specific. Don't use FOR ALL unless you genuinely want the same rule for SELECT, INSERT, UPDATE, and DELETE. And avoid overly permissive policies — a policy like USING (true) defeats the entire purpose.
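One way to spot the permissive anti-pattern is to audit the policies already on your project. In the `pg_policies` system view, a `qual` of `true` on a sensitive table is exactly the allow-everything rule described above:

```sql
-- List all policies in the public schema; a qual of "true"
-- means the policy matches every row
SELECT tablename, policyname, cmd, qual
FROM pg_policies
WHERE schemaname = 'public';
```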
4. Test from the outside
After configuring RLS, verify it works. Try to access data without authentication:
```bash
# This should return an empty array or an error, NOT your data
curl "https://yourproject.supabase.co/rest/v1/users?select=*" \
  -H "apikey: your-anon-key" \
  -H "Authorization: Bearer your-anon-key"
```
If you get data back, your policies aren't working correctly. Go back and check.
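This check can also be scripted. A sketch of the interpretation logic, with no network calls; the assumed behavior matches Supabase's REST layer, where RLS with no permissive policy returns 200 with an empty array:

```javascript
// Classify the response of an unauthenticated probe against a table.
// status: HTTP status code; body: parsed JSON response body.
function classifyProbe(status, body) {
  if (status === 200 && Array.isArray(body)) {
    // RLS on with no matching policy yields 200 and an empty array
    return body.length > 0 ? "VULNERABLE: rows leaked" : "protected: no rows";
  }
  if (status === 401 || status === 403) return "protected: access denied";
  return "inconclusive: inspect manually";
}

console.log(classifyProbe(200, [{ id: 1, email: "a@b.c" }]));
console.log(classifyProbe(200, []));
```

Run the curl probe, feed the status and body into a function like this, and wire it into CI so a regression never ships silently.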
5. Review what else your AI generated
RLS is the most critical issue, but it's not the only thing to check:
- Are there API keys or secrets stored in your database? They shouldn't be. Move them to environment variables.
- Does your app have server-side API routes? Make sure they validate authentication before accessing data.
- Are there any admin or debug endpoints? Remove them or lock them behind proper auth.
- Does your app handle file uploads? Check that storage buckets have appropriate access policies too.
6. Set up monitoring
Even after fixing the immediate issue, you should know if someone is accessing data they shouldn't:
- Enable Supabase's built-in logging
- Set up alerts for unusual query patterns
- Monitor for bulk data access (SELECT * on large tables)
- Watch for access from unexpected IP addresses or user agents
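As a sketch of the last two checks, here is a simple heuristic over access-log entries. The entry shape is hypothetical; adapt it to whatever your logging pipeline actually emits:

```javascript
// Flag log entries that look like bulk exfiltration:
// large row counts, or many requests from a single IP.
function flagBulkReads(entries, { rowThreshold = 1000, requestThreshold = 50 } = {}) {
  const requestsByIp = new Map();
  for (const e of entries) {
    requestsByIp.set(e.ip, (requestsByIp.get(e.ip) || 0) + 1);
  }
  return entries.filter(
    (e) => e.rowsReturned >= rowThreshold || requestsByIp.get(e.ip) >= requestThreshold
  );
}

// Hypothetical log sample for illustration
const sample = [
  { table: "users", rowsReturned: 13000, ip: "203.0.113.9" },
  { table: "debts", rowsReturned: 4, ip: "198.51.100.2" },
];
console.log(flagBulkReads(sample)); // flags only the 13,000-row read
```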
The bigger picture: what this means for vibe coding security
CVE-2025-48757 is a case study in a new category of software risk. Not a zero-day in an operating system. Not a supply chain attack through a compromised package. A vulnerability baked into the code generation template of an AI platform, reproduced identically across every app it produced.
The pattern maps directly to OWASP's top risks:
- A01:2021 - Broken Access Control: No row-level security enforcement. Any user can access any data.
- A05:2021 - Security Misconfiguration: Default insecure configuration. Missing security policies.
- A04:2021 - Insecure Design: Security not considered in the code generation process. No secure defaults.
These aren't exotic vulnerabilities. They're foundational security requirements that were simply absent.
The builders who used Lovable aren't at fault here. They used a tool that promised to handle the technical details. The platform's responsibility was to generate code that was not just functional but safe. It failed at that, and it failed at scale.
Going forward, the industry needs AI coding platforms to:
- Enable security by default. RLS should be on, not off. Policies should be generated, not omitted.
- Scan before deploying. Block deployments that have obvious security misconfigurations.
- Notify users when vulnerabilities are found. Don't just add a feature — tell the people whose apps are already exposed.
- Treat security as a core feature, not an add-on. If a platform generates the database, it owns the security of that database.
How Flowpatrol catches this
This is the exact category of vulnerability Flowpatrol is designed to find. When you scan an app built with Lovable, Bolt, or any AI coding platform, Flowpatrol:
- Analyzes client-side JavaScript for exposed Supabase and Firebase credentials
- Tests RLS enforcement by attempting unauthenticated data access against discovered endpoints
- Checks for cross-user data leakage to verify that row-level isolation actually works
- Validates that authentication flows are enforced across all data-access paths
A five-minute scan would have caught every one of the 303 vulnerable endpoints in the Lovable dataset. The fix was SQL statements that take less than a minute to write. The gap was knowing the problem existed.
You built something real. Now make sure it's solid. For a step-by-step hardening guide, check out How to Secure Your Lovable App.
This case study draws from public reporting by Superblocks, Guardio Labs, Semafor, and GBHackers. CVE-2025-48757 was formally disclosed on May 29, 2025.