The OWASP Top 10 Through the Lens of AI-Generated Code
The OWASP Top 10 isn't just for security teams. Your AI coding tool just generated code that touches most of these categories. Here's what each one looks like in a vibe-coded app.
What does the OWASP Top 10 look like in AI-generated code?
AI-generated code typically triggers 7 out of 10 OWASP categories right out of the box. The top three offenders: Broken Access Control (missing ownership checks on every CRUD endpoint), Security Misconfiguration (RLS disabled, debug mode on, default credentials), and Injection (raw SQL queries with string concatenation). In our scan of 100 vibe-coded apps, 78% had at least one high-severity OWASP issue.
The OWASP Top 10 is a list of the ten most critical security risks in web applications, updated every few years based on real-world breach data. It wasn't written for builders using AI coding tools — but it maps perfectly to the patterns those tools generate. AI optimizes for "does this work?" — not "does this survive an attacker?" The OWASP Top 10 is the gap between those two questions, mapped out and numbered.
This guide walks through all ten categories from your perspective. For each one, you'll see what it actually looks like in a vibe-coded app — the specific code patterns, the default configurations, the things AI generates that you'd never think to question.
A01: Broken Access Control
The #1 risk on the list. Also the #1 issue in vibe-coded apps.
Broken access control means users can do things they shouldn't — see other people's data, access admin pages, modify records that aren't theirs. OWASP put this at the top of the list because it showed up in 94% of the applications they tested.
In AI-generated code, it looks like this:
// AI-generated API route — fetches an invoice by ID
export async function GET(request, { params }) {
  const invoice = await db.invoices.findUnique({
    where: { id: params.id },
  });
  return Response.json(invoice);
}
This works. It returns the invoice. But it doesn't check who is asking. Change the ID in the URL from your invoice to someone else's, and you get their data. This is called an IDOR (Insecure Direct Object Reference), and AI generates this pattern constantly because it builds CRUD endpoints that fetch by ID without adding ownership checks.
The fix is one line:
export async function GET(request, { params }) {
  const user = await getAuthUser(request);
  const invoice = await db.invoices.findUnique({
    where: {
      id: params.id,
      user_id: user.id, // ownership check
    },
  });
  if (!invoice) {
    return Response.json({ error: "Not found" }, { status: 404 });
  }
  return Response.json(invoice);
}
If you're using Supabase, Row Level Security is the database-level version of this same fix. When RLS is disabled — which is the default in many AI-generated Supabase setups — every row in every table is accessible to anyone with your project's public API key. Our scan data shows 68% of BaaS-backed vibe-coded apps have RLS disabled on at least one table.
A02: Cryptographic Failures
When sensitive data isn't protected — in transit, at rest, or in your JavaScript bundle.
This category covers everything from sending data over HTTP instead of HTTPS to storing passwords in plaintext. In vibe-coded apps, the most common version is simpler and more immediate: secrets in client-side code.
// AI put your secret key in a client component
const stripe = new Stripe(process.env.NEXT_PUBLIC_STRIPE_SECRET_KEY);
That NEXT_PUBLIC_ prefix means this key is baked into your JavaScript bundle. Anyone who visits your site can open DevTools and read it. This isn't a hypothetical — we find exposed Stripe keys, OpenAI keys, and Supabase service role keys in over half the vibe-coded apps we scan.
The pattern also shows up with password handling. AI sometimes stores passwords with weak hashing or skips hashing entirely when implementing custom auth:
// AI-generated signup — plaintext password storage
await db.users.create({
  data: {
    email: email,
    password: password, // stored as-is
  },
});
What to check: Search your codebase for NEXT_PUBLIC_ and verify every variable with that prefix is something you'd be comfortable posting publicly. If you see NEXT_PUBLIC_STRIPE_SECRET_KEY, NEXT_PUBLIC_OPENAI_API_KEY, or NEXT_PUBLIC_SUPABASE_SERVICE_ROLE_KEY, fix it now.
A03: Injection
The oldest vulnerability in the book — and AI keeps writing it.
SQL injection was first documented in 1998. We have parameterized queries. We have ORMs. It should be extinct. But AI coding tools learned to code by reading the internet, and the internet has decades of tutorials that use string concatenation.
Here's what AI generates when you ask for a search feature:
// AI-generated search — vulnerable
const result = await pool.query(
  `SELECT * FROM products WHERE name LIKE '%${search}%'`
);
An attacker types ' UNION SELECT username, password FROM users -- into the search box, and your app returns every credential in the database. The fix:
// Parameterized — safe
const result = await pool.query(
  "SELECT * FROM products WHERE name LIKE $1",
  [`%${search}%`]
);
The tricky version is dynamic sorting. When you ask AI to add sortable columns, it generates this:
// AI-generated sorting — vulnerable
const result = await pool.query(
  `SELECT * FROM products ORDER BY ${sortBy} ${order}`
);
You can't parameterize column names — the database won't accept ORDER BY $1. So the fix is an allowlist:
const allowedColumns = ["name", "price", "created_at"];
const column = allowedColumns.includes(sortBy) ? sortBy : "created_at";
AI almost never generates allowlists. It sees "user picks a column" and goes straight for interpolation.
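Note that the vulnerable version interpolates two values, the column and the direction, so both need an allowlist. A sketch of the full pattern (the function name is illustrative):

```javascript
const allowedColumns = ["name", "price", "created_at"];

// Both inputs fall back to safe defaults if they aren't on the allowlist,
// so nothing user-controlled ever reaches the query string verbatim.
function safeOrderBy(sortBy, order) {
  const column = allowedColumns.includes(sortBy) ? sortBy : "created_at";
  const direction = order === "desc" ? "DESC" : "ASC";
  return `ORDER BY ${column} ${direction}`;
}
```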
A04: Insecure Design
Architectural flaws that no amount of good code can fix.
This is the hardest category because it's about decisions, not code. OWASP added it in 2021 to capture risks that exist even when the implementation is technically correct.
In vibe-coded apps, the most common design flaw is missing rate limiting. AI builds login forms, password reset flows, and API endpoints without any throttling. An attacker can try ten thousand passwords per minute. Your AI-generated login page will happily process every one.
// AI-generated login — no rate limiting
export async function POST(request) {
  const { email, password } = await request.json();
  const user = await verifyCredentials(email, password);
  if (!user) {
    return Response.json({ error: "Invalid credentials" }, { status: 401 });
  }
  // create session...
}
The response tells the attacker they got it wrong ("Invalid credentials"), and there's nothing stopping them from trying again immediately. No lockout. No delay. No CAPTCHA.
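A minimal fix can be sketched as an in-memory fixed-window limiter. The function and thresholds here are illustrative; a production app would usually back this with Redis or an edge rate-limiting service so limits survive restarts and scale across instances:

```javascript
// Track attempts per key (e.g. IP address) in a fixed time window.
const attempts = new Map();

function isRateLimited(key, limit = 5, windowMs = 60_000) {
  const now = Date.now();
  const entry = attempts.get(key) ?? { count: 0, windowStart: now };
  if (now - entry.windowStart > windowMs) {
    entry.count = 0; // window expired, reset the counter
    entry.windowStart = now;
  }
  entry.count += 1;
  attempts.set(key, entry);
  return entry.count > limit;
}
```

In the login handler above, you'd call `isRateLimited(clientIp)` before `verifyCredentials` and return a 429 when it's true.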
Other design issues AI creates:
- No account lockout after failed login attempts
- Predictable resource IDs (sequential integers instead of UUIDs)
- Client-side price calculations that an attacker can modify
- Missing re-authentication for sensitive actions like changing email or deleting accounts
These aren't bugs you can grep for. They're things that were never built in the first place.
A05: Security Misconfiguration
The defaults AI ships with are almost never production-ready.
Security misconfiguration is the gap between "it works" and "it's configured correctly." OWASP lists it at #5, but for vibe-coded apps, it might be the most pervasive category because AI doesn't configure infrastructure — it writes application code.
Here's what "misconfigured" looks like in practice:
Missing security headers. Open DevTools on a typical vibe-coded app and check the response headers:
HTTP/2 200
content-type: text/html
That's it. No Content Security Policy. No HSTS. No X-Frame-Options. Compare with what a production app should return:
content-security-policy: default-src 'self'; script-src 'self'
strict-transport-security: max-age=31536000; includeSubDomains
x-frame-options: DENY
x-content-type-options: nosniff
referrer-policy: strict-origin-when-cross-origin
Verbose error messages. AI-generated apps often return full stack traces in production. An attacker gets your database schema, file paths, and framework version from a single malformed request.
Default CORS: accept everything. When AI sets up an API, it often adds Access-Control-Allow-Origin: * to avoid CORS errors during development. That same wildcard goes to production, letting any website make requests to your API.
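The CORS fix is an origin allowlist instead of the wildcard. A sketch, with placeholder domains standing in for your own:

```javascript
// Reflect the request's Origin header only when it's on an explicit allowlist.
const allowedOrigins = ["https://app.example.com", "https://www.example.com"];

function corsHeaders(requestOrigin) {
  if (allowedOrigins.includes(requestOrigin)) {
    return { "Access-Control-Allow-Origin": requestOrigin };
  }
  return {}; // unknown origin: no CORS header, the browser blocks the response
}
```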
The fix for headers in Next.js:
// next.config.js
module.exports = {
  async headers() {
    return [{
      source: "/(.*)",
      headers: [
        { key: "X-Frame-Options", value: "DENY" },
        { key: "X-Content-Type-Options", value: "nosniff" },
        { key: "Referrer-Policy", value: "strict-origin-when-cross-origin" },
        {
          key: "Content-Security-Policy",
          value: "default-src 'self'; script-src 'self' 'unsafe-inline';"
        },
        {
          key: "Strict-Transport-Security",
          value: "max-age=31536000; includeSubDomains"
        },
      ],
    }];
  },
};
We see missing security headers in 47% of vibe-coded apps we scan. It's the easiest fix on this list — one config block, and every response gets the right headers.
A06: Vulnerable and Outdated Components
Your AI picked the dependencies. Did it pick safe versions?
Every vibe-coded app has a package.json full of dependencies the AI selected. Some of those packages have known security issues. Some haven't been updated in years. Some were abandoned by their maintainers and acquired by unknown parties (see: polyfill.io, which served malicious JavaScript to 380,000 websites after a domain acquisition).
AI picks dependencies based on popularity in its training data, not on whether they're currently maintained or free of known vulnerabilities. A package that was popular in 2022 might have three unpatched CVEs in 2026.
What to check:
npm audit
That's the simplest version. It checks your dependency tree against the npm vulnerability database. If it reports critical or high severity issues, update or replace those packages.
For deeper visibility, check the last commit date and open issue count on any unfamiliar dependency in your package.json. If a package hasn't been updated in two years and has 200 open issues, find an alternative.
A07: Identification and Authentication Failures
AI builds the login page. It doesn't always build the security around it.
Authentication looks simple from the outside — a login form, a session, a redirect. But OWASP lists a full category of ways it goes wrong: weak passwords, missing brute-force protection, broken session management, credential stuffing.
The most dangerous pattern in vibe-coded apps is client-side-only auth:
// Client component — this is NOT a security control
"use client";
export default function Dashboard() {
  const { user } = useAuth();
  if (!user) {
    redirect("/login");
    return null;
  }
  return <DashboardContent />;
}
This redirects unauthenticated users to the login page. But the API endpoints that load dashboard data? AI often leaves those unprotected. An attacker doesn't use your UI — they call your API directly. If /api/dashboard/data responds without checking the session, the login page is just decoration.
Another common pattern — no password requirements:
// AI-generated signup — accepts any password
const { email, password } = await request.json();
await createUser(email, password);
No minimum length. No complexity check. A user can sign up with the password "1" and AI won't stop them.
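A minimal server-side check can be sketched like this. The exact rules are yours to set; current guidance generally favors length over composition rules:

```javascript
// Returns { ok, errors } so the signup handler can report what failed.
function validatePassword(password) {
  const errors = [];
  if (password.length < 12) errors.push("must be at least 12 characters");
  if (password.length > 128) errors.push("must be at most 128 characters");
  // Optionally also reject passwords found in breach corpora
  // (e.g. via the haveibeenpwned range API), omitted here.
  return { ok: errors.length === 0, errors };
}
```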
What to check: For every API endpoint in your app, ask: what happens if I call this without a session token? If it returns data, your auth has a gap. Test it with curl or your browser's Network tab.
A08: Software and Data Integrity Failures
Code that doesn't verify what it's running.
This category covers a range of integrity issues: insecure CI/CD pipelines, deserialization attacks, and loading third-party code without verification. For vibe-coded apps, the most relevant pattern is unverified external scripts.
<!-- AI added this analytics snippet — no integrity check -->
<script src="https://cdn.example.com/analytics.js"></script>
If that CDN gets compromised, every visitor to your site runs the attacker's code. Subresource Integrity (SRI) hashes prevent this by letting the browser verify the file hasn't changed:
<script
  src="https://cdn.example.com/analytics.js"
  integrity="sha384-abc123..."
  crossorigin="anonymous"
></script>
AI rarely adds SRI hashes. It copies the integration pattern from its training data, which usually doesn't include them.
A09: Security Logging and Monitoring Failures
If someone is poking at your app right now, would you know?
Most vibe-coded apps have zero logging for security-relevant events. No record of failed login attempts. No alerts for unusual API activity. No audit trail for data access.
This isn't something AI builds unless you ask for it. When you prompt "build me a SaaS app," you get features — not observability. That means if someone is testing your endpoints for injection, brute-forcing your login, or scraping your data through an IDOR, there's no signal that it's happening.
The minimum to add:
- Log failed authentication attempts (with the IP address, not the password)
- Log access to sensitive endpoints (admin panels, user data exports)
- Set up alerts for anomalies: 50 failed logins in a minute, API calls from unexpected regions, bulk data access
If you're on Vercel, Vercel Logs gives you request-level data. If you're on Supabase, check the Auth logs in the dashboard. Neither replaces application-level logging, but they're a start.
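Application-level logging can start as a single function. A sketch that emits one JSON line per security event, so the output is grep-able now and can feed an alerting pipeline later (the event names are illustrative):

```javascript
// Structured security log entry: timestamp, event name, and context fields.
function logSecurityEvent(event, details) {
  const entry = {
    ts: new Date().toISOString(),
    event,        // e.g. "auth.login_failed"
    ...details,   // e.g. { ip, email }; never include the password
  };
  console.log(JSON.stringify(entry));
  return entry;
}
```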
A10: Server-Side Request Forgery (SSRF)
When your app makes HTTP requests on behalf of the user — and doesn't check where.
SSRF happens when an application fetches a URL provided by the user without validating the destination. AI generates this pattern in several common features:
- URL preview / link unfurling — paste a link, see a preview card
- Webhook configuration — enter a URL to receive notifications
- File import — provide a URL to import data from
// AI-generated URL preview — no validation
export async function POST(request) {
  const { url } = await request.json();
  const response = await fetch(url); // fetches anything
  const html = await response.text();
  // parse and return preview data...
}
An attacker passes http://169.254.169.254/latest/meta-data/ (the AWS metadata endpoint) and your server dutifully fetches its own cloud credentials and returns them.
The fix: Validate that the URL points to a public internet address. Block private IP ranges, localhost, and cloud metadata endpoints. Use an allowlist of accepted domains if possible.
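That validation can be sketched as a hostname check. A production version should also resolve DNS and re-check the resulting IP before fetching, since DNS rebinding can point a public hostname at a private address:

```javascript
// Reject URLs that target localhost, private ranges, or cloud metadata endpoints.
function isSafeUrl(input) {
  let url;
  try {
    url = new URL(input);
  } catch {
    return false; // not a parseable URL
  }
  if (url.protocol !== "http:" && url.protocol !== "https:") return false;
  const host = url.hostname;
  if (host === "localhost") return false;
  // Block loopback, RFC 1918 private ranges, and link-local (incl. 169.254.169.254)
  if (/^(10\.|127\.|169\.254\.|192\.168\.|172\.(1[6-9]|2\d|3[01])\.)/.test(host)) {
    return false;
  }
  return true;
}
```

The preview handler above would return a 400 when `isSafeUrl(url)` is false, before calling fetch.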
What pattern connects all ten categories?
Read through this list again and you'll notice a theme. AI generates code that handles the expected case — the right user, the normal input, the intended flow. Every OWASP category is about the unexpected case — a different user, a malicious input, a manipulated request.
That's not a flaw in AI. It's a gap in the process. AI builds. Something else needs to verify. That's true whether the "something else" is you manually reviewing code, a security-focused prompt, or an automated scan.
The OWASP Top 10 is a useful framework because it gives you a structured way to think about what could go wrong. But frameworks are only useful if you act on them.
What should you fix first?
You don't need to fix all ten categories today. Start with the ones that cause the most damage in vibe-coded apps:
- Check your access controls (A01). Open your app's Network tab. Call your API endpoints without a session. Change IDs in the URL. If you get someone else's data, that's your top priority.
- Search your bundle for secrets (A02). Run grep -r "sk_live\|sk-\|service_role" .next/static/ after building. If anything comes back, move those keys server-side.
- Audit your queries for injection (A03). Search for SELECT.*\${ and ORDER BY.*\${ in your codebase. Every match needs a parameterized query or an allowlist.
- Add security headers (A05). Copy the next.config.js headers block from this article. One file, five headers, every response protected.
- Scan your app. Flowpatrol checks for all ten OWASP categories — broken access controls, injection, missing headers, exposed secrets, the full list. Paste your URL and see what comes back before someone else finds it.
The OWASP Top 10 was written for security teams. But the vulnerabilities it describes show up in every app, built by every tool, every day. Now you know what to look for.