• Agents
  • Pricing
  • Blog
Log in
Get started

Security for apps built with AI. Paste a URL, get a report, fix what matters.

Product

  • How it works
  • What we find
  • Pricing
  • Agents
  • MCP Server
  • CLI
  • GitHub Action

Resources

  • Guides
  • Blog
  • Docs
  • OWASP Top 10
  • Glossary
  • FAQ

Security

  • Supabase Security
  • Next.js Security
  • Lovable Security
  • Cursor Security
  • Bolt Security

Legal

  • Privacy Policy
  • Terms of Service
  • Cookie Policy
  • Imprint
© 2026 Flowpatrol. All rights reserved.
OWASP Top 10 for LLM Applications · 2025

Ten patterns.
Sixty seconds each.

OWASP's guide to the ten most critical risks in LLM-powered apps — RAG pipelines, agents, copilots, and chat interfaces. Grounded in community research and written for developers building on top of foundation models.

Web Top 10LLM Top 10API Top 10
LLM01

The "ignore your instructions" bug

Prompt Injection

The bug where a user message overrides your system prompt and the model happily goes along with it.

Read the story
LLM02

The "my bot knows too much" bug

Sensitive Information Disclosure

The bug where your AI cheerfully reveals things it was trained on, retrieved, or told in confidence.

Read the story
LLM03

The "I npm installed a model" bug

Supply Chain

The bug where the model, dataset, or plugin you pulled from a hub wasn't quite what it said it was.

Read the story
LLM04

The "it learned the wrong thing on purpose" bug

Data and Model Poisoning

The bug where someone slipped bad examples into your training or retrieval data and the model learned them.

Read the story
LLM05

The "I piped the model straight into eval" bug

Improper Output Handling

The bug where you trust the model output enough to render it, run it, or put it in a SQL query.

Read the story
LLM06

You gave the agent the keys to the whole house

Excessive Agency

The bug where the agent can do far more than it needs to — and eventually, it does.

Read the story
LLM07

The 'just ask it for its instructions' bug

System Prompt Leakage

The bug where your secret system prompt turns out to be whatever the model felt like saying that day.

Read the story
LLM08

The 'my RAG bot answers everyone with everyone's data' bug

Vector and Embedding Weaknesses

The bug where a shared vector store treats every user as the same user.

Read the story
LLM09

The 'the model was wrong and the user trusted it' bug

Misinformation

The bug where a confident, well-formatted answer turns out to be entirely made up.

Read the story
LLM10

The 'I woke up to a $40,000 OpenAI bill' bug

Unbounded Consumption

The bug where your chatbot turns into somebody else's free inference endpoint.

Read the story

Scan for all ten.

Flowpatrol tests every category on this list — and proves every finding with a real exploit. Paste a URL, get a report in minutes.

Try it freeWhat we find