View source on a React SPA and you'll see a near-empty HTML document. A <div id="root"></div>, a couple of script tags, and nothing else. Every heading, every paragraph, every product description lives inside a JavaScript bundle that the browser has to download, parse, and execute before any content appears.
Google says it renders JavaScript. And it does, eventually. But "eventually" can mean hours or days, and the rendering budget isn't unlimited. Pages that depend entirely on client-side rendering sit in a queue, and some of them never make it through. Meanwhile, every AI crawler, every answer engine, and every LLM that checks your page sees the same empty div.
The hydration gap
Server-side rendering (SSR) and static site generation (SSG) solve the core problem by sending real HTML content in the initial response. But hydration, the process where client-side JavaScript takes over the server-rendered HTML, introduces its own failure modes.
The most common hydration gap is a mismatch between what the server rendered and what the client expects. The server sends HTML with a product price of $49.99. The client-side code fetches the current price, gets $52.99, and re-renders. For a brief moment, the user sees the old price flash to the new one. A crawler that doesn't execute JavaScript sees only the server-rendered version, which might be stale.
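The gap can be checked mechanically: strip tags from both versions and diff the visible text. A minimal Python sketch of that idea, where the visible_text helper and the price values are illustrative, not part of any real tool:

```python
import re

def visible_text(html: str) -> str:
    """Approximate what a no-JS crawler reads: drop scripts and tags, collapse whitespace."""
    text = re.sub(r"<script.*?</script>", " ", html, flags=re.S)
    text = re.sub(r"<[^>]+>", " ", text)
    return re.sub(r"\s+", " ", text).strip()

# Hypothetical case: server-rendered HTML ships a price the client later overwrites.
server_html = '<main><span class="price">$49.99</span></main>'
client_html = '<main><span class="price">$52.99</span></main>'

stale = visible_text(server_html) != visible_text(client_html)
print(stale)  # True: crawlers and users start from different content
```

The same comparison works for any content, not just prices: if the two text extractions differ, some audience is seeing the wrong version.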
What the audit checks
The Prerender JS Hydration Parity tool inspects the raw HTML response before any JavaScript runs. It looks for patterns that indicate the page depends on client-side rendering:
Empty root containers. A <div id="root"></div> or <div id="app"></div> with no child content. This means the server isn't rendering anything meaningful.
Missing H1. If the primary heading only exists in a JavaScript-rendered component, crawlers that don't execute JS see a page with no heading. That's a strong negative signal for both SEO and accessibility.
Script-dependent content markers. Placeholder text like "Loading..." or skeleton screens that JavaScript later replaces. Crawlers that don't run the script index the placeholder as the final content.
Hydration attribute mismatches. Framework-specific attributes (data-reactroot, data-server-rendered, ng-version) that indicate SSR was attempted but may have gaps.
The tool compares what a no-JS crawler would see against what a full browser sees, and flags every piece of content that only exists in the JavaScript-rendered version.
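The checks above can be sketched with nothing but the standard library's HTML parser. This is an illustration of the pattern detection described here, not the actual tool; the root IDs and placeholder strings are assumptions:

```python
from html.parser import HTMLParser

class HydrationAudit(HTMLParser):
    """Flags client-side-rendering patterns in a raw HTML response (illustrative sketch)."""
    ROOT_IDS = {"root", "app"}                  # common SPA mount-point ids (assumed list)
    PLACEHOLDERS = ("loading...", "skeleton")   # assumed placeholder markers

    def __init__(self):
        super().__init__()
        self.h1_seen = False
        self.empty_root = False
        self.placeholder_hit = False
        self._root_depth = None        # depth of an open root container, if any
        self._root_has_content = False
        self._depth = 0

    def handle_starttag(self, tag, attrs):
        self._depth += 1
        if tag == "h1":
            self.h1_seen = True
        if tag == "div" and dict(attrs).get("id") in self.ROOT_IDS:
            self._root_depth = self._depth
        elif self._root_depth is not None:
            self._root_has_content = True  # any child element counts as content

    def handle_endtag(self, tag):
        if self._root_depth == self._depth:
            if not self._root_has_content:
                self.empty_root = True     # root container closed with nothing inside
            self._root_depth = None
        self._depth -= 1

    def handle_data(self, data):
        text = data.strip().lower()
        if self._root_depth is not None and text:
            self._root_has_content = True
        if any(p in text for p in self.PLACEHOLDERS):
            self.placeholder_hit = True

def audit(html: str) -> dict:
    parser = HydrationAudit()
    parser.feed(html)
    return {
        "empty_root": parser.empty_root,
        "missing_h1": not parser.h1_seen,
        "placeholder_text": parser.placeholder_hit,
    }

print(audit('<body><div id="root"></div><script src="bundle.js"></script></body>'))
# → {'empty_root': True, 'missing_h1': True, 'placeholder_text': False}
```

Real SPA markup is messier than this sketch handles (void elements, comments, inline JSON), but the principle is the same: every signal comes from the raw response, before any JavaScript runs.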
Why this matters for AI search
Googlebot has a rendering service. Most AI crawlers don't. When Perplexity, Claude, or ChatGPT checks your page, they're often working from the raw HTML response. If your content is locked behind JavaScript execution, these systems either skip your site or work from whatever fragment they can extract.
This is the same problem RSS readers had ten years ago and email clients have today. Any channel that consumes your content without running your full JavaScript stack needs the content to be in the HTML. SSR, SSG, or prerendering solves this. Client-side-only rendering guarantees that some percentage of your potential audience sees nothing.
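If you go the prerendering route, the delivery side reduces to one decision: does this request get the prerendered snapshot? A rough Python sketch of that routing check, with a non-exhaustive, illustrative list of crawler user-agent substrings:

```python
# Non-exhaustive list of crawler user-agent substrings (illustrative).
CRAWLER_MARKERS = ("googlebot", "bingbot", "gptbot", "claudebot", "perplexitybot")

def wants_prerendered(user_agent: str) -> bool:
    """Return True when the request should be served the prerendered HTML snapshot."""
    ua = user_agent.lower()
    return any(marker in ua for marker in CRAWLER_MARKERS)

print(wants_prerendered("Mozilla/5.0 (compatible; GPTBot/1.0)"))  # True
print(wants_prerendered("Mozilla/5.0 (Windows NT 10.0) Chrome/120"))  # False
```

In practice this check usually lives at the CDN or edge layer rather than in application code, and the snapshot must contain the same content users see, so the setup stays clear of cloaking.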
If you're choosing a tech stack for a new site and want to avoid these problems from the start, The $97 Launch ($9.99 on Kindle) covers how to pick an architecture that serves content without requiring client-side rendering.
Fact-check notes and sources
- Google's JavaScript rendering is a two-phase process: indexing the raw HTML first, then queuing pages for rendering via the Web Rendering Service (WRS). Source: Google Search Central, "Understand the JavaScript SEO basics."
- Google's Martin Splitt has confirmed that rendering is resource-constrained and pages may wait in the render queue. Source: Google Search Central YouTube, "JavaScript SEO" series.
- Google's WRS renders pages with an evergreen version of Chromium running headless. Source: Google Web Developers documentation.
Related reading
- Render block audit — finding scripts that block rendering entirely
- Passage retrieval and SEO — how search engines pick passages from your content
- RAG readiness audit — making your content parseable for AI retrieval systems
- Chunk retrievability — how AI systems break your content into citable sections
This post is informational, not SEO-consulting advice. Mentions of Google, React, Perplexity, and other third parties are nominative fair use. No affiliation is implied.