Most of the big Firebase breaches of the last two years were not exploits. They were settings. The rule that should have said 'only the owner can read this' said 'anyone can read this' instead, because the default is open and nobody flipped the switch. The code did exactly what the code was told.
Security misconfiguration is the category for bugs that live in settings rather than code. Open database rules, missing security headers, debug endpoints left on, wildcard CORS, permissive S3 ACLs, verbose error pages. Each one is a checkbox somebody did not untick. The code runs fine — the environment is the vulnerability.
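Most of the items in that list are fixed in a few lines of server config rather than in application code. As a hedged illustration only, here is roughly what the header-related fixes look like in nginx (the directives are real nginx directives; the origin and policy values are placeholders you would tailor to your app):

```
# Hypothetical nginx snippet: the hardening headers most audits expect,
# with CORS pinned to one known origin instead of "*".
add_header X-Content-Type-Options "nosniff" always;
add_header Strict-Transport-Security "max-age=31536000" always;
add_header Content-Security-Policy "default-src 'self'" always;
add_header Access-Control-Allow-Origin "https://app.example.com" always;

server_tokens off;   # stop advertising the nginx version in responses
```

The point is not this exact snippet; it is that every line above is a setting with a safe value and an unsafe default, and nothing breaks visibly when you pick the unsafe one.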
What your AI actually built
You asked for a Firebase or Supabase backend and the model set it up. Tables exist, the frontend talks to them, reads and writes go through. When you tested as yourself, everything worked.
What the model did not do was lock the defaults. The Firestore rules end in allow read, write: if true, so every document is world-readable and world-writable. Row Level Security is turned off on every table. CORS is wide open because the local dev config leaked into production. Error pages show stack traces and the framework version. Each setting is a doorway that was never closed, because none of them were code you wrote.
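For the Firestore case, closing the doorway means replacing the permissive rule with one scoped to the signed-in owner. A minimal sketch, assuming user documents live in a collection named users keyed by the owner's uid (the collection name is an assumption, not something from your project):

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // Hypothetical layout: /users/{userId} documents owned by that user.
    // Only a signed-in user whose uid matches the document id gets through.
    match /users/{userId} {
      allow read, write: if request.auth != null
                         && request.auth.uid == userId;
    }
    // Everything not matched above is denied by default.
  }
}
```

On Supabase the equivalent move is enabling Row Level Security per table (alter table ... enable row level security) and adding a policy that compares auth.uid() to the row's owner column; with RLS off, the anon key reads everything.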
The same story plays out in debug endpoints that ship enabled, admin panels on a predictable path, S3 buckets with public list permission, and headers like X-Powered-By that cheerfully tell you exactly which CVE to look up. There is no bug. Just a hundred small knobs, all turned to the wrong default.
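Knobs like these are also cheap to audit mechanically. As a sketch of the idea, here is a small Python function that flags the header-level symptoms above in a single response; the header names it checks are real HTTP headers, while the function name and the lists of headers are my own illustrative choices:

```python
# Flag common header misconfigurations in one HTTP response.
# LEAKY headers advertise your stack; EXPECTED headers are hardening
# headers whose absence is itself a finding.

LEAKY = {"x-powered-by", "server", "x-aspnet-version"}
EXPECTED = {
    "strict-transport-security",
    "x-content-type-options",
    "content-security-policy",
}

def audit_headers(headers: dict) -> list[str]:
    """Return a list of findings for one response's headers."""
    names = {k.lower() for k in headers}
    findings = [f"leaks software info: {h}" for h in sorted(LEAKY & names)]
    findings += [f"missing hardening header: {h}" for h in sorted(EXPECTED - names)]
    return findings

# Example: a default Express response that still sends X-Powered-By
print(audit_headers({"X-Powered-By": "Express", "Content-Type": "text/html"}))
```

Running a check like this against your deployed app takes seconds, which is exactly the asymmetry of this bug class: the attacker's scanner is doing the same thing, at scale, against everyone's defaults.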