OWASP's guide to the ten most critical risks in LLM-powered apps — RAG pipelines, agents, copilots, and chat interfaces. Grounded in community research and written for developers building on top of foundation models.
The bug where a user message overrides your system prompt and the model happily goes along with it.
Read the storyThe bug where your AI cheerfully reveals things it was trained on, retrieved, or told in confidence.
Read the storyThe bug where the model, dataset, or plugin you pulled from a hub wasn't quite what it said it was.
Read the storyThe bug where someone slipped bad examples into your training or retrieval data and the model learned them.
Read the storyThe bug where you trust the model output enough to render it, run it, or put it in a SQL query.
Read the storyThe bug where the agent can do far more than it needs to — and eventually, it does.
Read the storyThe bug where your secret system prompt turns out to be whatever the model felt like saying that day.
Read the storyThe bug where a shared vector store treats every user as the same user.
Read the storyThe bug where a confident, well-formatted answer turns out to be entirely made up.
Read the storyThe bug where your chatbot turns into somebody else's free inference endpoint.
Read the storyFlowpatrol tests every category on this list — and proves every finding with a real exploit. Paste a URL, get a report in minutes.