The model is smart. The model is helpful. The model is also a stranger typing things into your app. The second you render its text as HTML, run it as a shell command, or paste it into a SQL query, you're letting that stranger write code in your runtime.
Improper output handling is the bug where a downstream system consumes the model's text without treating it as untrusted input. Render it as HTML and you get XSS. Shell-exec it and you get RCE. Put it in a SQL string and you get SQL injection. The class is old; the input is new.
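The SQL case fits in a few lines. This is a hedged sketch, not a real driver: `modelOutput` stands in for whatever the model returned, and the parameterized shape just illustrates keeping query text and data separate (placeholder syntax varies by database driver).

```javascript
// A model reply containing a quote breaks naive SQL string-building.
// (Illustrative payload; any quote-bearing output has the same effect.)
const modelOutput = "Robert'; DROP TABLE users; --";

// Vulnerable: model text concatenated straight into the query string.
// The payload is now part of the SQL the database will parse.
const naive = `SELECT * FROM users WHERE name = '${modelOutput}'`;

// Safer: keep the query fixed and pass the model text as a bound value,
// so the driver never parses it as SQL. ('?' is a common placeholder style.)
const parameterized = {
  text: "SELECT * FROM users WHERE name = ?",
  values: [modelOutput],
};

console.log(naive.includes("DROP TABLE"));        // true — payload leaked into SQL
console.log(parameterized.text.includes("DROP")); // false — query text stays fixed
```

The fix is the same one databases settled on decades ago: the untrusted string travels as data, never as code.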
What your AI actually built
You built a feature where the model returns something and your code uses it. A chatbot that replies with Markdown. An agent that generates a shell command. A natural-language query tool that produces SQL on the fly. These are the shapes every AI app eventually takes.
In every case, there's a line in your code that takes the model's output and hands it to something that executes. innerHTML. exec. db.query. The model is treated as a trusted coauthor.
It isn't. Its output is a function of whatever went in, which includes whatever your user typed, which includes whatever an attacker typed. The model can be coaxed into producing a <script> tag, an rm -rf, or a DROP TABLE — and your code will happily run it.
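For the HTML sink, the defense is encoding before rendering. A minimal sketch, runnable outside a browser — `escapeHtml` is a hypothetical helper written inline here, not a library function; in a real app you'd reach for your framework's escaping or `textContent` instead of `innerHTML`:

```javascript
// Hypothetical helper: encode the five HTML-significant characters
// so the browser treats model text as text, never as markup.
function escapeHtml(untrusted) {
  return untrusted
    .replace(/&/g, "&amp;") // must run first, or later entities get double-encoded
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

const modelReply = "<script>stealCookies()</script> Here is your answer.";

// Vulnerable shape: element.innerHTML = modelReply  → the script tag goes live.
// Safer shape: escape first (or assign to element.textContent, which never parses markup).
const safe = escapeHtml(modelReply);

console.log(safe);
// → "&lt;script&gt;stealCookies()&lt;/script&gt; Here is your answer."
```

Same principle as the SQL case: the boundary between the model's text and your runtime is where the encoding has to happen.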
How it gets exploited
A marketing site has an AI chatbot whose replies are rendered as Markdown in the page.