What are injection vulnerabilities?
An injection vulnerability exists whenever user-supplied data gets treated as code. Instead of being handled as plain text, the input becomes part of a SQL query, an HTML page, a URL request, or a shell command. The application can't tell the difference between your data and its own instructions.
This is one of the oldest classes of web security bugs — and still one of the most common. SQL injection alone has been responsible for some of the largest data breaches in history. Cross-site scripting (XSS) lets attackers run JavaScript in other users' browsers. Server-side request forgery (SSRF) turns your server into a proxy for attacking internal infrastructure. They all share the same root cause: untrusted input in a trusted context.
What it looks like in code
Here's a search endpoint that an AI tool might generate. The user's query is dropped directly into the SQL string with a template literal. It works — until someone sends a crafted input.
// AI-generated search endpoint
app.get('/api/search', async (req, res) => {
  const { query } = req.query;
  const results = await db.query(
    `SELECT * FROM products WHERE name LIKE '%${query}%'`
  );
  res.json(results);
});
// Attacker sends: ?query=' OR 1=1 --
// Result: returns every row in the table

// Parameterized query — user input never touches the SQL
app.get('/api/search', async (req, res) => {
  const { query } = req.query;
  const results = await db.query(
    'SELECT * FROM products WHERE name LIKE $1',
    [`%${query}%`]
  );
  res.json(results);
});
// Attacker sends: ?query=' OR 1=1 --
// Result: searches for the literal string "' OR 1=1 --"

Why AI tools generate injection bugs
It's not that AI tools don't "know" about parameterized queries. They do. But several forces push them toward the unsafe version:
- Training data is full of vulnerable examples. Stack Overflow answers, blog tutorials, and code snippets overwhelmingly use string concatenation for simplicity. AI models learn what they see.
- Parameterized queries add friction. Prepared statements require different syntax per database driver. AI tools often take the path of least resistance and inline the values instead.
- Context windows lose track. In a long prompt session, the model forgets earlier instructions about security. By the time it generates the search endpoint, safe patterns have fallen out of context.
Common injection patterns
SQL injection in search & filter
User input dropped straight into a SQL query via template literals. One crafted search term can dump your entire database.
XSS in user-generated content
Rendering user input as raw HTML without sanitization. An attacker can inject scripts that steal sessions or redirect users.
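The usual fix is to escape user content before it reaches the page, or to use a templating engine that escapes by default. A minimal sketch of the escaping step (the `escapeHtml` helper here is illustrative, not a production-grade sanitizer):

```javascript
// Convert the five HTML-significant characters into entities so the
// browser renders them as text instead of parsing them as markup.
function escapeHtml(input) {
  return String(input)
    .replace(/&/g, '&amp;')   // must run first, before entities are added
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

// A script tag in a user comment becomes inert text
const comment = '<script>steal(document.cookie)</script>';
const safe = escapeHtml(comment);
// safe === '&lt;script&gt;steal(document.cookie)&lt;/script&gt;'
```

In practice you'd lean on your framework's auto-escaping (React's JSX, EJS's `<%= %>`, etc.) rather than hand-rolling this, and reserve a vetted sanitizer library for cases where some HTML must be allowed through.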
SSRF via URL parameters
The app fetches a URL the user provides without validating the target. Attackers point it at internal services, cloud metadata endpoints, or localhost.
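One common defense is an allow-list of destinations, checked before the server fetches anything. A sketch, assuming the app only needs to reach two hypothetical CDN hosts:

```javascript
// Hosts the server is allowed to fetch from (hypothetical examples)
const ALLOWED_HOSTS = new Set(['images.example.com', 'cdn.example.com']);

function isSafeUrl(rawUrl) {
  let url;
  try {
    url = new URL(rawUrl);
  } catch {
    return false; // not a parseable URL at all
  }
  // Only plain HTTPS to hosts we explicitly trust, which rules out
  // file://, internal IPs, localhost, and cloud metadata endpoints.
  return url.protocol === 'https:' && ALLOWED_HOSTS.has(url.hostname);
}

isSafeUrl('https://cdn.example.com/logo.png');        // true
isSafeUrl('http://169.254.169.254/latest/meta-data'); // false
isSafeUrl('https://localhost:8080/admin');            // false
```

An allow-list alone doesn't cover every SSRF path: a complete defense also validates the resolved IP address and refuses to follow redirects, since either can route an "allowed" request to an internal target.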
Command injection in file processing
User-supplied filenames or parameters passed to shell commands. A semicolon and a few characters give attackers full control.
How Flowpatrol detects injection bugs
Flowpatrol tests your live app the way an attacker would — from the outside, with no access to source code.
1. Discovers inputs. Flowpatrol maps every endpoint, query parameter, form field, and header your app accepts.
2. Sends payloads. Each input gets tested with injection-specific payloads — SQL syntax, script tags, internal URLs, shell metacharacters.
3. Analyzes responses. The scanner checks for database errors, reflected content, unexpected redirects, and timing differences that confirm the injection landed.
4. Chains attacks. If a single injection works, Flowpatrol tests whether it can be escalated — extracting data, accessing internal services, or executing commands.
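The payload-and-analyze steps can be sketched as a simple probe-and-match loop. This is an illustration of the general technique, not Flowpatrol's actual internals, and the error-signature list is a small illustrative sample:

```javascript
// Telltale fragments that commonly appear in database error pages
const SQL_ERROR_SIGNS = [
  'syntax error',
  'sqlite_error',
  'ora-',          // Oracle error codes
  'mysql server',
];

// Given a response body, does it look like an injection payload
// broke out of the query and triggered a database error?
function looksInjectable(responseBody) {
  const body = String(responseBody).toLowerCase();
  return SQL_ERROR_SIGNS.some(sign => body.includes(sign));
}

looksInjectable('500: syntax error at or near "OR"'); // true
looksInjectable('{"results": []}');                   // false
```

Real scanners combine several signals, since error strings alone miss blind injections; that's where the timing differences mentioned above come in.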
Check your app for injection flaws.
Paste your URL. Flowpatrol tests every input your app accepts and shows you exactly what's exposed.
Try it free