SQL Injection Is Not Dead: How It Shows Up in AI-Generated Code
SQL injection was first documented in 1998 and should be a solved problem. But AI coding tools are bringing it back — generating string concatenation, raw queries, and dynamic column names that open the door to attacks. Here's how to spot it and fix it.
Is AI-generated code vulnerable to SQL injection?
Yes. SQL injection remains common in vibe-coded apps, especially those using raw database queries instead of ORMs. AI coding tools generate three specific patterns that create SQL injection risk: string concatenation in search queries, template literals in filter/sort endpoints, and dynamic column names built from user input. Each one is a direct path to full database compromise.
SQL injection was first documented in 1998. We've had parameterized queries, ORMs, and framework defaults designed to make it extinct for decades. But AI coding assistants learned to code by reading the internet — including millions of examples of insecure code from every era of web development, and they reproduce the old patterns as readily as the new ones.
Here's exactly how this happens, what the vulnerable patterns look like, and how to check your own code.
How did SQL injection come back?
In 1998, security researcher Jeff Forristal (writing as "rain.forest.puppy") published the first known description of SQL injection in Phrack Magazine. The technique was devastatingly simple: if an application builds SQL queries by concatenating user input, an attacker can inject their own SQL commands.
The industry responded. Parameterized queries became standard. ORMs like Sequelize, Prisma, and Drizzle were built with injection prevention as a core feature. By 2010, any experienced developer would tell you: SQL injection is a solved problem.
So why is it back?
Because AI coding assistants learned to code by reading the internet. And the internet is full of tutorials, Stack Overflow answers, blog posts, and GitHub repos from every era of web development — including the decades before parameterized queries were the default. The AI doesn't know which patterns are from 2004 and which are from 2024. It optimizes for code that works, not code that's safe.
The result: AI-generated code that's functionally correct and structurally vulnerable.
What SQL injection patterns does AI generate?
Let's look at the specific vulnerable patterns AI coding tools generate most often. Each one includes the vulnerable code and the fixed version.
Pattern 1: String concatenation in search queries
This is the classic. You ask your AI to build a search endpoint, and it produces something like this:
// VULNERABLE — AI-generated search endpoint
app.get("/api/users/search", async (req, res) => {
const { name } = req.query;
const result = await pool.query(
`SELECT * FROM users WHERE name = '${name}'`
);
res.json(result.rows);
});
This works perfectly for normal input. Search for "alice" and you get Alice's record. But search for ' OR 1=1 -- and you get every user in the database.
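You can watch the mechanics without touching a database. Here is a minimal sketch; buildSearchQuery is our own illustrative helper mirroring the template above, not part of the endpoint:

```javascript
// Mirrors the vulnerable template: user input is spliced into the SQL text.
function buildSearchQuery(name) {
  return `SELECT * FROM users WHERE name = '${name}'`;
}

// Normal input produces the query you expect.
console.log(buildSearchQuery("alice"));
// → SELECT * FROM users WHERE name = 'alice'

// Malicious input: the leading quote closes the string, OR 1=1 matches
// every row, and -- comments out the now-dangling trailing quote.
console.log(buildSearchQuery("' OR 1=1 --"));
// → SELECT * FROM users WHERE name = '' OR 1=1 --'
```

The database can't tell where your SQL ends and the attacker's begins, because by the time the string arrives, there is no boundary left.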
The fix is parameterized queries — sending the SQL structure and the user input separately, so the database never confuses data for commands:
// FIXED — parameterized query
app.get("/api/users/search", async (req, res) => {
const { name } = req.query;
const result = await pool.query(
"SELECT * FROM users WHERE name = $1",
[name]
);
res.json(result.rows);
});
The difference is subtle — $1 instead of ${name}, and the value passed as a separate array. But that small change makes injection impossible. The database treats the input as a value, never as part of the SQL command.
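Conceptually, a parameterized query reaches the database as two separate pieces. This toy model shows the idea only; it is not how the pg driver is actually implemented:

```javascript
// Toy model: the SQL text and the values travel as two distinct parts
// and are never spliced together on the client.
function prepare(text, values) {
  return { text, values };
}

const stmt = prepare("SELECT * FROM users WHERE name = $1", ["' OR 1=1 --"]);
// The attack string stays inside the values array; the SQL text is untouched.
console.log(stmt.text);      // → SELECT * FROM users WHERE name = $1
console.log(stmt.values[0]); // → ' OR 1=1 --
```

Because the structure and the data never merge into one string, there is nothing for the attacker's quotes and comments to break out of.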
AI tools generate the vulnerable version regularly because their training data is full of tutorials that use template literals for simplicity. The concatenated version is shorter, easier to read, and appears more often in learning materials. The AI is optimizing for clarity, not security.
Pattern 2: Dynamic ORDER BY and column names
This one trips up even experienced developers. You want users to sort a table by clicking column headers, so you ask the AI to add dynamic sorting:
// VULNERABLE — dynamic ORDER BY
app.get("/api/products", async (req, res) => {
const { sortBy, order } = req.query;
const result = await pool.query(
`SELECT * FROM products ORDER BY ${sortBy} ${order}`
);
res.json(result.rows);
});
Here's the problem: parameterized queries can't help you with column names or SQL keywords. You can't write ORDER BY $1 — the database expects an identifier there, not a string value. If you try to parameterize it, the query either fails or treats your column name as a literal string.
An attacker can exploit this by passing something like (CASE WHEN (SELECT password FROM users LIMIT 1) LIKE 'a%' THEN price ELSE name END) as the sortBy parameter. By observing how the sort order changes, they can extract data from other tables character by character. This is called a blind SQL injection, and it's just as dangerous as the obvious kind.
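Substituting that payload makes the attack concrete. buildProductQuery is an illustrative stand-in for the template above:

```javascript
// Mirrors the vulnerable template: sortBy lands inside the SQL structure,
// where parameterization can't protect it.
function buildProductQuery(sortBy, order) {
  return `SELECT * FROM products ORDER BY ${sortBy} ${order}`;
}

const payload =
  "(CASE WHEN (SELECT password FROM users LIMIT 1) LIKE 'a%' THEN price ELSE name END)";

// The subquery against users now executes inside the ORDER BY clause.
// Each request leaks one yes/no answer about the stolen password.
console.log(buildProductQuery(payload, "ASC"));
```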
The fix is an allowlist — a hardcoded set of valid values:
// FIXED — allowlist for column names
app.get("/api/products", async (req, res) => {
const { sortBy, order } = req.query;
const allowedColumns = ["name", "price", "created_at", "category"];
const allowedOrders = ["ASC", "DESC"];
const column = allowedColumns.includes(sortBy) ? sortBy : "created_at";
const direction = allowedOrders.includes(order?.toUpperCase())
? order.toUpperCase()
: "ASC";
const result = await pool.query(
`SELECT * FROM products ORDER BY ${column} ${direction}`
);
res.json(result.rows);
});
The user input never reaches the query. The code checks it against a list of known-good values and falls back to a default if it doesn't match.
AI almost never generates allowlists for dynamic column names. It sees the pattern — "user picks a column, query sorts by it" — and goes straight for the interpolation. This is one of the most common blind spots in AI-generated code.
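If you want the check to be reusable and unit-testable, pull it into a small helper. The names below are ours, not from the endpoint above:

```javascript
// Maps untrusted sort parameters onto known-good SQL identifiers,
// falling back to safe defaults for anything off the list.
const ALLOWED_COLUMNS = ["name", "price", "created_at", "category"];
const ALLOWED_ORDERS = ["ASC", "DESC"];

function safeSort(sortBy, order) {
  const column = ALLOWED_COLUMNS.includes(sortBy) ? sortBy : "created_at";
  const direction = ALLOWED_ORDERS.includes((order ?? "").toUpperCase())
    ? order.toUpperCase()
    : "ASC";
  return { column, direction };
}
```

The endpoint then interpolates only column and direction, neither of which can ever contain attacker-controlled text.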
Pattern 3: Raw SQL inside ORMs
This is the one that catches people off guard. You're using Prisma, Drizzle, or Sequelize. You know ORMs prevent SQL injection. You feel safe. And for the most part, you are — until AI reaches for the escape hatch.
Every ORM has a way to run raw SQL. Prisma has $queryRaw and $executeRaw. Sequelize has sequelize.query(). Drizzle has sql.raw(). These exist for legitimate reasons — complex queries that the ORM's query builder can't express.
But when AI uses them, it often does so unsafely:
// VULNERABLE — Prisma raw query with string interpolation
app.get("/api/users/search", async (req, res) => {
const { email } = req.query;
const users = await prisma.$queryRawUnsafe(
`SELECT * FROM "User" WHERE email = '${email}'`
);
res.json(users);
});
This looks almost like the safe version, but $queryRawUnsafe takes a plain string, so ${email} is interpolated into the SQL before Prisma ever sees it. There is no parameterization at all. The tagged-template form of $queryRaw does parameterize its expressions automatically, but AI frequently reaches for the Unsafe variant instead, or wraps tagged-template expressions in extra quotes until the query breaks.
The fix:
// FIXED — let Prisma handle parameterization
app.get("/api/users/search", async (req, res) => {
const { email } = req.query;
const users = await prisma.$queryRaw`
SELECT * FROM "User" WHERE email = ${email}
`;
res.json(users);
});
Or better yet, just use the ORM:
// BEST — use Prisma's query builder
app.get("/api/users/search", async (req, res) => {
const { email } = req.query;
const users = await prisma.user.findMany({
where: { email },
});
res.json(users);
});
The danger here is the false sense of security. Developers see Prisma in the import statement and assume injection is impossible. AI reinforces this by reaching for raw queries when a standard query builder call would work fine. It's optimizing for the pattern it's seen most often in similar contexts, not for the safest approach.
How does SQL injection actually work?
If you've never seen a SQL injection attack in action, let's walk through exactly what happens. Understanding the mechanics makes it much easier to spot the vulnerability in your own code.
Attack 1: Login bypass
Imagine an AI-generated login endpoint:
const { username, password } = req.body;
const result = await pool.query(
`SELECT * FROM users
WHERE username = '${username}'
AND password = '${password}'`
);
if (result.rows.length > 0) {
// User is authenticated
}
An attacker enters this as the username:
admin' --
The resulting SQL becomes:
SELECT * FROM users
WHERE username = 'admin' --' AND password = 'anything'
The -- starts a SQL comment. Everything after it is ignored — including the password check. The query now just says "find me the user named admin," and the attacker is logged in without knowing the password.
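The same substitution, sketched in a few lines; buildLoginQuery mirrors the vulnerable template above:

```javascript
// Mirrors the vulnerable login query.
function buildLoginQuery(username, password) {
  return `SELECT * FROM users WHERE username = '${username}' AND password = '${password}'`;
}

const injected = buildLoginQuery("admin' --", "anything");
// Everything from -- onward is a SQL comment, so the effective query is
// just: SELECT * FROM users WHERE username = 'admin'
const effective = injected.split("--")[0].trim();
console.log(effective);
```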
Attack 2: UNION-based data extraction
UNION injection is how attackers steal data from tables they shouldn't be able to access. If a search page is vulnerable:
const result = await pool.query(
`SELECT name, email FROM products WHERE name LIKE '%${search}%'`
);
An attacker enters:
' UNION SELECT username, password FROM users --
The resulting query:
SELECT name, email FROM products WHERE name LIKE '%'
UNION SELECT username, password FROM users --%'
The UNION combines the results of two queries. The first returns ordinary products (LIKE '%' matches every row), and the second appends every username and password from the users table — displayed right where product results would normally appear.
These aren't exotic techniques. Automated tools like SQLMap can discover and exploit them in seconds.
Why does AI generate vulnerable SQL?
AI coding assistants generate vulnerable SQL patterns for specific, predictable reasons:
Training data bias. The internet has twenty-eight years of SQL tutorials. The first fifteen years mostly used string concatenation because that's how everyone learned. Parameterized queries became dominant later. The AI has seen both patterns millions of times, and the insecure one is well-represented in its training data.
Optimizing for functionality, not security. When you ask AI to build a search endpoint, it optimizes for "does this work?" String concatenation works. Template literals work. The AI reaches for the pattern that most directly accomplishes the stated goal. Security is a constraint it doesn't apply unless you ask.
No threat modeling. A human developer (ideally) thinks about what happens when a user sends unexpected input. AI doesn't model adversarial behavior. It generates code for the happy path — the normal user searching for a normal product name.
Context window limitations. AI sees the current file, maybe a few related files. It doesn't see your deployment configuration or your threat model. It can't evaluate whether a particular query is exposed to untrusted input from three layers up the call stack.
Does using an ORM protect you from SQL injection?
"I'm using Prisma, so I'm safe from SQL injection."
This belief is widespread and wrong — not because Prisma is insecure, but because AI routinely bypasses the ORM's built-in protections.
Prisma's query builder is safe. When you write prisma.user.findMany({ where: { email } }), there's no injection risk. The ORM constructs parameterized queries internally.
But when AI reaches for $queryRaw, $executeRaw, or $queryRawUnsafe (yes, Prisma literally has a function with "Unsafe" in the name), you're back to writing raw SQL. And AI writes raw SQL the same way it writes any SQL — often with string concatenation.
The same applies to every ORM:
// Sequelize — safe
await User.findAll({ where: { email } });
// Sequelize — AI-generated, vulnerable
await sequelize.query(
`SELECT * FROM users WHERE email = '${email}'`
);
// Drizzle — safe
await db.select().from(users).where(eq(users.email, email));
// Drizzle — AI-generated, vulnerable
await db.execute(sql.raw(
`SELECT * FROM users WHERE email = '${email}'`
));
The ORM is only as safe as the code that uses it. If AI bypasses the query builder, the ORM's protections don't apply.
How do you check your AI-generated code for SQL injection?
Here's a practical checklist. Run these searches across your codebase — they'll catch the majority of SQL injection vulnerabilities AI tends to introduce.
Search for string concatenation in queries
Look for template literals or string concatenation near SQL keywords:
# Find template literals with SQL keywords
grep -rn "SELECT.*\${" --include="*.ts" --include="*.js" .
grep -rn "INSERT.*\${" --include="*.ts" --include="*.js" .
grep -rn "UPDATE.*\${" --include="*.ts" --include="*.js" .
grep -rn "DELETE.*\${" --include="*.ts" --include="*.js" .
grep -rn "WHERE.*\${" --include="*.ts" --include="*.js" .
grep -rn "ORDER BY.*\${" --include="*.ts" --include="*.js" .
Any match is a potential vulnerability. Review each one to confirm whether the interpolated value comes from user input.
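If you prefer to run the check from code, the same searches can be expressed as a short Node script. This is a sketch, not a static analyzer; the regex mirrors the greps above and will produce the same false positives:

```javascript
// Flags lines where a SQL keyword and a template-literal interpolation
// appear together: the signature of string-built queries.
const SQL_INTERPOLATION = /\b(SELECT|INSERT|UPDATE|DELETE|WHERE|ORDER BY)\b.*\$\{/i;

function findSuspectLines(source) {
  return source
    .split("\n")
    .map((text, i) => ({ line: i + 1, text: text.trim() }))
    .filter(({ text }) => SQL_INTERPOLATION.test(text));
}
```

Feed it the contents of each file (for example, from fs.readFileSync) and review every hit by hand, just as with the grep results.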
Search for raw query methods
# Prisma
grep -rn "\$queryRaw\|\$executeRaw\|\$queryRawUnsafe" --include="*.ts" .
# Sequelize
grep -rn "sequelize.query" --include="*.ts" --include="*.js" .
# Drizzle
grep -rn "sql\.raw" --include="*.ts" .
# Generic
grep -rn "\.raw(" --include="*.ts" --include="*.js" .
Raw queries aren't automatically bad, but every one needs manual review. Make sure they use parameterization, not concatenation.
The fix checklist
For every query that touches user input:
- Use parameterized queries. $1, ?, or ORM-native parameter binding. Never concatenate.
- Allowlist dynamic identifiers. Column names, table names, sort directions — if the value becomes part of SQL structure (not data), validate it against a hardcoded list.
- Prefer the ORM's query builder. Only use raw queries when the builder genuinely can't express what you need. If AI generated a raw query, check whether the ORM's builder could do the same thing.
- Validate and sanitize input. Even with parameterized queries, validate that input matches expected formats. An email field should look like an email. A numeric ID should be a number.
- Use database permissions. Your application's database user shouldn't have permission to DROP tables. Use the principle of least privilege — read-only connections for read-only operations.
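For the validation step in the checklist above, even a few lines help. A minimal sketch for numeric IDs; adjust the rules to your own schema:

```javascript
// Parses an untrusted ID parameter into a positive integer, or null.
// Rejecting bad input early means it never reaches a query at all.
function parseId(raw) {
  if (typeof raw !== "string" || !/^\d+$/.test(raw)) return null;
  const id = Number(raw);
  return Number.isSafeInteger(id) && id > 0 ? id : null;
}

console.log(parseId("42"));                   // → 42
console.log(parseId("42; DROP TABLE users")); // → null
```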
Why does this matter for vibe-coded apps?
SQL injection is a solved problem — in theory. The tools exist. The patterns are well-documented. Every modern framework provides safe defaults.
But AI coding tools are a time machine. They pull patterns from every era of web development and present them as new code. When your AI generates a search endpoint with string concatenation, it's not being lazy or malicious. It's producing code that matches the statistical pattern of millions of similar examples in its training data. Many of those examples were written before parameterized queries were the norm.
The fix isn't to stop using AI. The fix is to know what to look for. SQL injection has a clear signature — user input flowing into query strings without parameterization. The searches above take five minutes to run. The vulnerabilities they catch could prevent a complete database compromise.
How can you catch SQL injection automatically?
This is one of the patterns Flowpatrol scans for. When you point it at your app, it tests for SQL injection across every endpoint — search fields, sort parameters, filter inputs, and login forms. It checks for the classic patterns and the subtle ones like ORDER BY injection that don't show up in basic testing.
Five minutes to scan. One less way for someone to walk through your front door.
SQL injection: discovered in 1998, still showing up in code generated in 2026. Now you know what to look for.