Agents are magic right up until someone asks them to do something they should not. The model is happy to help. The tools are happy to comply. Nothing in between is checking whether any of this was ever supposed to happen.
Excessive Agency is what happens when an LLM-driven agent has more tools, more permissions, or more autonomy than the task actually requires. The model is allowed to act on behalf of the user — but 'the user' has been quietly replaced with 'anyone whose text ends up in the prompt.'
What your AI actually built
You wanted an assistant that could read a user's calendar and draft replies. The easiest way to get that working was to hand it a service account with full read/write on the mailbox, the calendar, the files, and the contacts. It worked on the first try.
The model now has more permission than any human on the team. Every tool call it makes runs as 'the assistant,' and the assistant can do anything. There is no per-user scope, no per-action confirmation, no allow-list of safe verbs.
Agency creeps in three directions at once: too many tools, too many permissions per tool, and too much autonomy to call them without asking. Any one of those on its own is survivable. All three together is a blast radius.
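A minimal sketch of the guardrails the assistant above is missing, assuming a hypothetical dispatch layer (every name here — `dispatch`, `run_as`, the verb strings — is illustrative, not from any real framework): each verb must sit on an allow-list, anything that mutates state requires an explicit confirmation, and execution runs with the requesting user's own scope instead of a god-mode service account.

```python
# The task only needs these; everything else is refused outright.
ALLOWED_VERBS = {"calendar.read", "mail.draft"}
# Verbs that mutate state need a human to say yes first.
NEEDS_CONFIRMATION = {"mail.send"}

def run_as(user, tool_call):
    # Stub: a real system would call the API here with credentials
    # scoped to `user`, never a shared service account.
    return f"{user}:{tool_call['verb']}"

def dispatch(tool_call, user, confirm):
    """Run a model-requested tool call under the user's scope, not a god key."""
    verb = tool_call["verb"]
    if verb not in ALLOWED_VERBS | NEEDS_CONFIRMATION:
        raise PermissionError(f"{verb} is not on the allow-list")
    if verb in NEEDS_CONFIRMATION and not confirm(user, tool_call):
        raise PermissionError(f"{verb} requires explicit user confirmation")
    return run_as(user, tool_call)
```

The point is not this particular code; it is that each of the three creep directions gets its own check, so injected text in the prompt can at worst request what the task was already allowed to do.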
How it gets exploited
A support bot is wired into the company Slack, the CRM, and the billing system through a single shared API key.
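In miniature, that wiring looks something like this (a hypothetical sketch — the class and key names are made up, and the stub `Client` stands in for the real SDKs): one credential, three backends, so every tool call the model emits carries the full authority of all of them.

```python
SHARED_API_KEY = "sk-support-bot-prod"  # hypothetical: one credential everywhere

class Client:
    # Stand-in for a real Slack / CRM / billing SDK client.
    def __init__(self, name, key):
        self.name, self.key = name, key

    def call(self, action, **args):
        return f"{self.name}.{action} as holder of {self.key}"

class SupportBot:
    def __init__(self, key):
        # Slack, the CRM, and billing all trust the same key equally.
        self.slack = Client("slack", key)
        self.crm = Client("crm", key)
        self.billing = Client("billing", key)

    def handle(self, tool_call):
        # Whatever text reached the model can now reach billing:
        # there is no narrower credential to fall back on.
        target = getattr(self, tool_call["system"])
        return target.call(tool_call["action"], **tool_call.get("args", {}))
```

Nothing in `handle` distinguishes a ticket-lookup from a refund; the key does not either.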