Shipping an AI feature used to mean training a model. Now it means pulling one off a hub, grabbing a dataset somebody else prepared, and clicking a plugin into your agent. Every one of those is a door, and every door came from a stranger.
LLM supply chain risk is everything your app imports on the way to inference — base models, fine-tunes, datasets, plugins, adapters, embeddings. Each one is code or data from someone else, pulled over the internet, usually with no signature. Any of them can ship a surprise.
What your AI app is actually built from
You downloaded a fine-tuned model from a public hub because it was three points better on the benchmark you cared about. You pip-installed a LangChain plugin that promised to scrape PDFs. You grabbed a dataset for evals from a public repo.
All three were the right call for shipping fast. None of them came with a signature, a provenance trail, or a meaningful review. The hub showed a download count and a thumbs-up — that was the full trust story.
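Download counts aren't a trust signal, but a pinned digest is a start. A minimal sketch of the idea: record a SHA-256 the first time you vet an artifact, then refuse to load anything that doesn't match. The pinned value below is a placeholder (it happens to be the digest of an empty file), not a real model hash — in practice you'd record your own at vetting time.

```python
import hashlib

# Placeholder pin (SHA-256 of the empty string) -- replace with the digest
# you recorded when you first vetted the artifact.
PINNED_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so multi-gigabyte weights never load into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(path: str, pinned: str = PINNED_SHA256) -> None:
    """Raise before the artifact is ever loaded if the digest has drifted."""
    actual = sha256_of(path)
    if actual != pinned:
        raise RuntimeError(f"{path}: digest {actual} does not match pinned {pinned}")
```

It's not provenance, and it won't catch a payload that was malicious on day one — but it does catch the artifact silently changing under you between the day you vetted it and the day you ship.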
The model might have a backdoor trigger phrase. The plugin might exfiltrate every document it touches. The dataset might be poisoned to teach your fine-tune the exact wrong answer to the one question that matters. You wouldn't know unless you looked, and there isn't a lint rule for this yet.
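You can't lint for a backdoor trigger phrase, but one class of model-file surprise is mechanically checkable: pickle-serialized weights can execute arbitrary code on load. A sketch of the standard static check, using only the standard library's `pickletools` — it walks the opcode stream without ever unpickling, and flags the opcodes that can invoke a callable at load time. The opcode list here is an assumption about what counts as risky, in the spirit of community pickle scanners, not an exhaustive rule.

```python
import pickletools

# Opcodes that can resolve or invoke a callable during unpickling --
# an assumed deny-list for illustration, not a complete one.
RISKY_OPCODES = {"GLOBAL", "STACK_GLOBAL", "INST", "OBJ", "NEWOBJ", "NEWOBJ_EX", "REDUCE"}

def scan_pickle(path: str) -> list[str]:
    """List risky opcodes found in a pickle file, without executing any of it."""
    findings = []
    with open(path, "rb") as f:
        for opcode, arg, pos in pickletools.genops(f):
            if opcode.name in RISKY_OPCODES:
                detail = f" -> {arg}" if arg else ""
                findings.append(f"{opcode.name} at byte {pos}{detail}")
    return findings
```

A pickle of plain data (dicts, lists, strings, numbers) comes back clean; anything that smuggles in a `__reduce__` payload lights up. It answers "can this file run code when I load it," not "is this model honest" — a poisoned fine-tune sails through.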
How it gets exploited
A startup fine-tunes a base model from a public hub for a legal research tool, then wires in a PDF plugin to ingest client documents. If the base model carries a backdoor trigger phrase, anyone who knows it can steer the tool's answers; if the plugin phones home, every client document it parses walks out the door. Neither failure shows up in benchmarks or code review — the compromise rode in with the dependency.
Know what your AI app is actually loading.
Flowpatrol inspects your model and plugin supply chain and flags the risky links before they ship.
Try it free