Your app is careful about what the user sends it. Every field validated, every string escaped. Then you call a partner API, parse the JSON, and drop it straight into your database. The partner is a real company with a real contract. The partner's API is an attacker's input to your system, and nobody told you to treat it that way.
Unsafe Consumption of APIs is exactly that: your app treats another API's response as trusted data. You validate what your users send you, then turn around and hand the JSON from a partner feed straight to your database, your template, or your browser. The partner is not necessarily malicious; they might be compromised, buggy, or have changed their schema this week.
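The trusting version looks roughly like this. The fetch itself is elided and the "database" is an in-memory stand-in; everything here is illustrative, not from any real integration:

```typescript
// The version a code assistant happily writes: parse the partner's JSON
// and store it verbatim. "db" is an in-memory stand-in for a real database.
const db: Record<string, unknown>[] = [];

function ingestPartnerFeed(rawBody: string): void {
  const products = JSON.parse(rawBody); // whatever the partner sent back
  for (const p of products) {
    db.push(p); // no shape check, no size cap, no escaping
  }
}
```

Nothing in that loop cares whether `products` has the fields you expect, or whether a "name" is a name or a script tag.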
What your AI actually built
You asked for an integration. Pull product data from a supplier feed. Enrich a profile with LinkedIn data. Call a weather API and cache the response. Stripe webhook. Shopify webhook. 'Login with Google.' The model happily wrote the fetch, parsed the JSON, and stored the result.
What it did not do was treat that JSON as untrusted input. It did not validate the shape, did not cap the size, did not escape the strings before rendering them, did not check the redirect, did not verify the signature. The upstream service is trusted by name. Its response is not.
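What treating the response as untrusted looks like can be sketched in a few lines. The feed, the field names (`id`, `name`, `priceCents`), and the limits below are hypothetical, a minimal shape check plus size cap rather than a complete defense:

```typescript
// Minimal sketch: cap the payload and validate the shape of an upstream
// product record before it touches the database. Schema and limits are
// illustrative, not from any real supplier API.
const MAX_BODY_BYTES = 1_000_000;
const MAX_FIELD_LEN = 500;

interface Product {
  id: string;
  name: string;
  priceCents: number;
}

function parseProduct(raw: unknown): Product {
  if (typeof raw !== "object" || raw === null) {
    throw new Error("expected an object");
  }
  const r = raw as Record<string, unknown>;
  if (typeof r.id !== "string" || r.id.length > MAX_FIELD_LEN) {
    throw new Error("bad id");
  }
  if (typeof r.name !== "string" || r.name.length > MAX_FIELD_LEN) {
    throw new Error("bad name");
  }
  if (
    typeof r.priceCents !== "number" ||
    !Number.isInteger(r.priceCents) ||
    r.priceCents < 0
  ) {
    throw new Error("bad priceCents");
  }
  // Only the fields we validated survive; unknown fields are dropped.
  return { id: r.id, name: r.name, priceCents: r.priceCents };
}

function parseFeed(body: string): Product[] {
  if (Buffer.byteLength(body, "utf8") > MAX_BODY_BYTES) {
    throw new Error("response too large");
  }
  const data: unknown = JSON.parse(body);
  if (!Array.isArray(data)) throw new Error("expected an array");
  return data.map((item) => parseProduct(item));
}
```

The point is not this particular schema; it is that the partner's response goes through the same gate user input does, and anything the gate did not approve never reaches storage.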
The nasty version is a trusted partner whose own API is compromised, or simply buggy. They return HTML in a field that used to be a plain string. Your app renders it. You now have XSS from a source that appears in no security tutorial, because it is not 'the user.'
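The fix on the rendering side is the one you already apply to user input: encode upstream strings for the context they land in. A minimal sketch; `escapeHtml` is a hand-rolled helper for illustration, and in a real app your template engine's auto-escaping should do this for you:

```typescript
// Treat upstream strings like user input: HTML-encode before rendering.
// escapeHtml is a toy helper; prefer your template engine's auto-escaping.
function escapeHtml(s: string): string {
  return s
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// A partner field that quietly turned into HTML stays inert text.
function renderProductName(nameFromPartnerApi: string): string {
  return `<span class="product-name">${escapeHtml(nameFromPartnerApi)}</span>`;
}
```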
How it gets exploited
An e-commerce app pulls product listings from a third-party drop-shipping API. New products appear automatically.