Everyone knows about SQL injection. Nobody writes it on purpose anymore. So how does it keep shipping? Because the model wrote a perfectly normal-looking search endpoint, and somewhere in the middle of that endpoint there is a template string that should have been a parameter.
Injection happens when user input gets treated as code instead of data. SQL injection is the famous one, but the family is larger: XSS in the browser, template injection in server-rendered views, command injection in a shell call, NoSQL injection in a Mongo filter. They all share a single root cause — data and instructions sharing the same channel.
What your AI actually built
You asked for a search endpoint. The model gave you one: take a query string, filter the users table, return the matches. Works on the happy path. Looks like every tutorial you have ever read.
What it did not do was bind the parameter. The search term gets concatenated into the SQL directly, because string interpolation reads naturally and the model has seen a thousand examples that look identical. The ORM call right next to it uses prepared statements — but this one raw query slipped through.
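To make the difference concrete, here is a minimal sketch using Python's stdlib sqlite3. The article doesn't specify a stack, so the schema and function names are illustrative; the point is the one-line gap between interpolating the term and binding it:

```python
import sqlite3

# Illustrative stand-in for the users table from the article.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

def search_vulnerable(term: str):
    # What the model wrote: the term is interpolated into the SQL,
    # so the database parses it as part of the query itself.
    return conn.execute(
        f"SELECT id, name FROM users WHERE name LIKE '%{term}%'"
    ).fetchall()

def search_safe(term: str):
    # The fix: bind the term as a parameter. The driver passes it as
    # pure data, so it can never change the query's structure.
    return conn.execute(
        "SELECT id, name FROM users WHERE name LIKE ?",
        (f"%{term}%",),
    ).fetchall()
```

On benign input the two are indistinguishable, which is exactly why the bug survives review: both return `[(1, "alice")]` for a search of `ali`.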
The same pattern lives one layer up in the frontend, where a comment renders through dangerouslySetInnerHTML. And one layer down in a job runner that passes a filename into exec without quoting. Different syntax, same bug: data treated as code.
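The job-runner case has the same shape. A hedged sketch, assuming a POSIX system with `wc` available (the actual command in the job runner is unspecified, so `wc -c` is a stand-in):

```python
import subprocess

def file_size_vulnerable(filename: str) -> str:
    # The pattern from the text: the filename is spliced into a shell
    # command string, so shell metacharacters in it get executed.
    return subprocess.run(
        f"wc -c {filename}", shell=True, capture_output=True, text=True
    ).stdout

def file_size_safe(filename: str) -> str:
    # The fix: pass an argument list and skip the shell entirely.
    # The filename arrives as a single argument, never as code.
    return subprocess.run(
        ["wc", "-c", filename], capture_output=True, text=True
    ).stdout
```

A filename like `/dev/null; echo pwned` runs `echo pwned` in the vulnerable version and is just an oddly named (nonexistent) file in the safe one. The XSS analogue is the same move: let the framework escape output by default instead of bypassing it with dangerouslySetInnerHTML.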
How it gets exploited
The attacker finds a search box on the public site. They type a single quote and press enter.
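Here is why that single quote matters, sketched against a toy sqlite table (names and schema are illustrative). The quote unbalances the string literal, and the resulting syntax error tells the attacker their input is being parsed as SQL; from there, the classic payload rewrites the query:

```python
import sqlite3

# Toy stand-in for the site's database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", 1), ("bob", 0)])

def search(term: str):
    # The vulnerable endpoint: the search term is spliced into the SQL.
    return conn.execute(
        f"SELECT name FROM users WHERE name LIKE '%{term}%'"
    ).fetchall()

# Step 1, the probe: a lone quote leaves the string literal
# unterminated, and the database raises a syntax error.
try:
    search("'")
    probe_errored = False
except sqlite3.OperationalError:
    probe_errored = True

# Step 2, the exploit: close the literal, OR in a condition that is
# always true, and comment out the trailing quote with --. The WHERE
# clause collapses and every row comes back.
all_rows = search("' OR 1=1 --")
```

The search box was supposed to return matching names; with the filter commented away, it returns the whole table.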