In April 2026, Cloudflare shipped Project Think and OpenAI shipped its updated Agents SDK on the same day. Both answer the same question: how does a long-running AI agent actually run in production? The implications for web professionals, covered in the agent runtime post on this site, are direct: the model is no longer reading your website. The runtime is. The model reads what the runtime hands it.
Three questions decide whether your page is legible to an agent runtime, and the Agent Runtime Readiness audit at /tools/agent-runtime-readiness/ tests all three on any URL.
The three tests
Test 1, server-rendered content ratio. Does your canonical content survive without JavaScript execution? Many runtimes do not run JS, and many that do run it inconsistently. The test fetches the static HTML response, removes script and style tags, computes the ratio of visible text to total script content, and reports the percentage. A page that renders entirely in the static HTML scores high. A page that ships a 200KB JS bundle plus an empty <div id="root"> container scores low.
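The ratio test can be sketched in a few lines of Python. This is a hypothetical approximation of the audit's heuristic, not its exact formula: it weighs visible-text bytes against script bytes in the static HTML, with no browser involved.

```python
import re

def ssr_content_ratio(html: str) -> float:
    """Rough SSR content ratio (0-100): visible text vs. script content
    in the static HTML. A sketch, not the audit's exact formula."""
    # Bytes spent on <script> blocks, tags included.
    script_chars = sum(len(m) for m in re.findall(
        r"<script\b[^>]*>.*?</script>", html, flags=re.S | re.I))
    # Strip script/style blocks, then all remaining tags,
    # to approximate the visible text a no-JS runtime would see.
    stripped = re.sub(r"<(script|style)\b[^>]*>.*?</\1>", "", html,
                      flags=re.S | re.I)
    text = re.sub(r"<[^>]+>", " ", stripped)
    text_chars = len(" ".join(text.split()))
    total = text_chars + script_chars
    return round(100 * text_chars / total, 1) if total else 0.0

# A server-rendered article scores high; an empty shell plus a big bundle scores low.
article = "<html><body><article>" + "word " * 400 + "</article></body></html>"
shell = '<div id="root"></div><script>' + "x" * 2000 + "</script>"
print(ssr_content_ratio(article))  # → 100.0
print(ssr_content_ratio(shell))    # → 0.0
```

The real test runs against the fetched response body rather than a string literal, but the arithmetic is the same shape.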
Test 2, inline JSON-LD presence. Is your structured data part of the static response, or is it injected client-side after page load? JSON-LD that gets added by client-side React after hydration is invisible to runtimes that do not execute JavaScript. The test parses the static HTML for <script type="application/ld+json"> blocks and reports the count plus the schema types detected.
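A minimal sketch of the JSON-LD check. The helper below is hypothetical; it assumes well-formed JSON and silently skips blocks it cannot parse, which is one reasonable way to count only schema a runtime could actually use.

```python
import json
import re

def inline_jsonld(html: str) -> dict:
    """Find <script type="application/ld+json"> blocks in the static HTML
    and report the count plus any @type values detected."""
    blocks = re.findall(
        r'<script[^>]+type=["\']application/ld\+json["\'][^>]*>(.*?)</script>',
        html, flags=re.S | re.I)
    types = []
    for raw in blocks:
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed schema is as invisible as missing schema
        items = data if isinstance(data, list) else [data]
        for item in items:
            if isinstance(item, dict) and item.get("@type"):
                types.append(item["@type"])
    return {"count": len(blocks), "types": types}

page = ('<script type="application/ld+json">'
        '{"@context":"https://schema.org","@type":"Article"}</script>')
print(inline_jsonld(page))  # → {'count': 1, 'types': ['Article']}
```

Note that this only inspects the static response. Schema injected after hydration would pass a DOM inspection in dev tools and still return a count of zero here, which is exactly the gap the test is designed to expose.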
Test 3, Markdown-for-Agents content negotiation. Does your host honor Accept: text/markdown and return a clean Markdown response? Cloudflare shipped Markdown for Agents in 2026 as a one-toggle implementation; OpenAI's Agents SDK and several other runtimes are increasingly likely to use the Markdown variant where available. The test sends the Accept header and inspects the Content-Type and body shape of the response.
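The negotiation test is one request header plus a response check. A stdlib sketch follows; the classification rule (trust the Content-Type, but reject a body that still looks like HTML) is my assumption about "body shape," not the tool's documented logic.

```python
import urllib.request

def classify(content_type: str, body: str) -> dict:
    """Decide whether a response looks like a genuine Markdown variant."""
    is_md_type = "markdown" in content_type.lower()
    looks_html = body.lstrip().lower().startswith(("<!doctype", "<html"))
    return {"content_type": content_type,
            "served_markdown": is_md_type and not looks_html}

def negotiates_markdown(url: str, timeout: float = 10.0) -> dict:
    """Send Accept: text/markdown and classify what comes back."""
    req = urllib.request.Request(url, headers={"Accept": "text/markdown"})
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        ctype = resp.headers.get("Content-Type", "")
        preview = resp.read(1024).decode("utf-8", errors="replace")
    return classify(ctype, preview)
```

A host with Markdown for Agents enabled should answer with a `text/markdown` Content-Type; a host that ignores the Accept header will typically return `text/html` and fail the classification.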
Each test contributes ~33 points to the composite 0-100 score.
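Combined, the scoring might look like the following hypothetical weighting. The real audit's exact weights are not specified beyond roughly a third per test, so treat the split below as illustrative.

```python
def composite_score(ssr_ratio: float, jsonld_count: int, markdown_ok: bool) -> int:
    """Hypothetical 0-100 composite: each test contributes ~33 points.
    SSR ratio scales continuously; the other two tests are pass/fail."""
    score = min(ssr_ratio, 100) / 100 * 34
    score += 33 if jsonld_count > 0 else 0
    score += 33 if markdown_ok else 0
    return round(score)

print(composite_score(100, 2, True))  # → 100
print(composite_score(50, 1, False))  # → 50
```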
What the score actually predicts
A high score means an agent runtime can fetch your page, understand it without running JavaScript, extract your structured data, and consume the Markdown variant when one is offered. A low score means the runtime gets a partial picture, often missing the canonical content, the schema, or both. When models pick citations from runtime-extracted candidates, they pick the high-scoring pages.
The score is not a citation forecast. There are many other variables: domain authority, recency, the user's prompt, the runtime's specific configuration, whether your page survives the 499 timeout problem covered separately. The runtime readiness score addresses one specific layer of the stack and is a strong proxy for "can the runtime even read this page."
What I have seen in practice
Three patterns recur across the sites I have tested.
The first is client-rendered React pages where the canonical content depends on hydration. The static HTML carries the shell, the navigation, and the footer. The article body is nowhere in the static response; it gets fetched and rendered by client-side JS. These pages score in the 20s on SSR content ratio. The fix is server-side rendering of the canonical content payload (Next.js App Router, Astro, Remix, or any framework with proper SSR support). Client-side hydration can still happen for human users; the runtime needs the bytes in the initial HTML.
The second is JSON-LD that lives in client-side schema-injection plugins. The plugin runs after page load, sees the URL, calls back to a server endpoint, and adds the appropriate schema block to the DOM. Runtimes that skip JS execution see no schema. The fix is server-render the JSON-LD or write it directly into the source.
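Server-rendering the JSON-LD can be as simple as emitting the block from whatever templating layer builds the page. A minimal Python sketch, with the caveat that the helper name and escaping choice are mine (escaping "</" keeps the payload from closing the script element early):

```python
import json

def jsonld_script_tag(schema: dict) -> str:
    """Render a JSON-LD block server-side so it ships in the static HTML.
    Escape "</" so the JSON payload cannot break out of the script element."""
    payload = json.dumps(schema).replace("</", "<\\/")
    return f'<script type="application/ld+json">{payload}</script>'

tag = jsonld_script_tag({"@context": "https://schema.org", "@type": "Article"})
print(tag)
```

Emit the returned string into the page `<head>` at render time and the schema is present in the first byte stream the runtime sees, with no plugin, callback, or DOM mutation required.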
The third is hosts that have technically excellent SSR and structured data but have not enabled Markdown for Agents (or any equivalent content negotiation). This is the easiest one to fix. On Cloudflare, it is one toggle in Caching > AI Crawl Control. The Markdown for Agents feature converts HTML to Markdown at the edge for clients sending the appropriate Accept header. The output is materially smaller than the equivalent HTML, more parseable, and easier for the model to consume. Suganthan's analysis from running the feature for 44 days documented genuine AI agents using the Markdown variant 1,421 times.
What the audit does not do
It does not run the page in a real browser. The SSR content ratio is a heuristic based on parsing the static HTML response. A page that is technically server-rendered but uses unusual layout patterns may score lower than it should. A page that is client-rendered but happens to have a lot of inline text (e.g., a noscript fallback) may score higher than it should. Use the score as an indicator and verify with View Source for any URL where the result surprises you.
It does not test authentication flow legibility. The fourth runtime question, whether your authentication is scoped to support unattended agents holding sessions across multiple calls, requires a different kind of testing. That is on the roadmap for a future tool.
It does not test agent-execution sandbox security, since that is a server-side concern not visible from outside.
Pair with the cache rule generator
The natural companion is the Cloudflare AI Cache Rule Generator. The runtime readiness audit tells you whether your pages are well-formed for runtime consumption. The cache rule generator removes the latency that would prevent runtime consumption from completing in time. Together they cover the legibility and the eligibility problems separately, which is the right factoring.
Related reading
- The Conversation Has Moved Past The Model for the deep dive on the April 2026 runtime shift
- Your Site Might Be Invisible To ChatGPT for the related eligibility layer
- The Best MCP Servers By Industry for the agent-tooling shift that the runtime layer expects to talk to
- Why ChatGPT Cites Your Competitor And Not You for the writing patterns that survive whichever runtime is reading your page
Fact-check notes and sources
- Cloudflare Project Think: Project Think announcement, InfoQ coverage
- OpenAI Agents SDK April 2026 update: The next evolution of the Agents SDK, TechCrunch coverage
- Cloudflare Markdown for Agents: docs, announcement blog, 44-day tracking analysis by Suganthan
- Cloudflare Agent Readiness score reference: blog announcement
Heuristic test based on parsing the static HTML response via jwatte's fetch-page proxy. Some hosts return different bodies to bot user agents than to humans; this tool sends a generic UA. SSR ratio is approximate. Verify with browser dev tools for any URL where the result surprises you.