The model is cheerful, fluent, and confidently wrong. That last part is the bug. Builders spend weeks polishing prompts so the answer sounds right and zero minutes making sure it is right — because the interface looks like search, not like a guess.
Misinformation is what happens when an LLM feature produces confident output that is simply wrong — fabricated APIs, invented citations, made-up facts. It is not a 'model problem' you can wait out. It is an application problem: you put a probabilistic writer in a slot where users expected a source of truth.
What your AI actually built
You built a feature that answers questions: legal terms, medication dosages, code snippets, citations, whatever the domain is. The bot replies in paragraphs, includes numbers, cites sources. It feels authoritative. That is the problem.
Hallucinations are not bugs in the model — they are the default output mode when the model runs out of real information. Your users cannot tell the difference between a grounded answer and a fluent guess. They were never asked to.
The fix is not 'prompt the model to be more careful.' The fix is a pipeline: retrieve real sources, constrain the output to those sources, and refuse when there is nothing to ground on. The model is a writer, not an oracle.
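That retrieve-constrain-refuse loop can be sketched in a few lines. Everything here is illustrative: the keyword-overlap retriever, the `GROUNDING_THRESHOLD` value, and the corpus shape are stand-ins for whatever vector store and model client you actually use.

```python
# Sketch of a grounded-answer pipeline: retrieve, constrain, refuse.
# The retriever and threshold are toy placeholders, not a real system.

GROUNDING_THRESHOLD = 0.75  # below this, refuse rather than guess


def retrieve(question: str, corpus: dict[str, str]) -> list[tuple[str, float]]:
    """Toy retriever: score documents by keyword overlap with the question."""
    terms = set(question.lower().split())
    scored = []
    for doc_id, text in corpus.items():
        words = set(text.lower().split())
        overlap = len(terms & words) / max(len(terms), 1)
        scored.append((doc_id, overlap))
    return sorted(scored, key=lambda s: s[1], reverse=True)


def answer(question: str, corpus: dict[str, str]) -> str:
    hits = retrieve(question, corpus)
    if not hits or hits[0][1] < GROUNDING_THRESHOLD:
        # Nothing to ground on: refuse instead of letting the model improvise.
        return "I don't have a source for that."
    doc_id, _score = hits[0]
    # In a real pipeline you would prompt the model with ONLY corpus[doc_id]
    # and instruct it to answer from that text, citing doc_id.
    return f"According to {doc_id}: {corpus[doc_id]}"
```

The important line is the refusal branch. A real retriever replaces the overlap score with embeddings, but the shape stays the same: the model only ever writes over text you retrieved, and silence beats a fluent guess.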
How it gets exploited
Picture a dev-tools assistant that generates code, answers library questions, and confidently names packages it only thinks exist.
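One narrow mitigation for that scenario: never surface a model-suggested package name without checking it against something you trust. The allowlist below is a stand-in for a real check (your lockfile, an internal mirror, or a registry lookup); every name in it is hypothetical example data, not a claim about any real package.

```python
# Sketch: vet model-suggested package names before showing an install
# command. KNOWN_PACKAGES stands in for a real index or lockfile check.

KNOWN_PACKAGES = {"requests", "numpy", "flask"}  # illustrative allowlist


def vet_packages(suggested: list[str]) -> tuple[list[str], list[str]]:
    """Split model-suggested packages into vetted and unverified names."""
    vetted = [p for p in suggested if p in KNOWN_PACKAGES]
    unverified = [p for p in suggested if p not in KNOWN_PACKAGES]
    return vetted, unverified


# "requets-pro" is the kind of plausible-sounding name a model invents --
# and the kind an attacker can register ahead of time.
ok, flagged = vet_packages(["requests", "requets-pro"])
```

The point is not this particular check but where it sits: between the model's output and the user's terminal, so an invented name gets flagged instead of installed.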