
Thirteen Probes For The Agent-Ready Web, And A Browser-Side Tool That Runs All Of Them On Any URL

The Agent Readiness Score that Cloudflare published in May 2026, mirrored by isitagentready.com, takes a single URL and grades it across five categories that an agent runtime cares about. Discoverability. Content. Bot Access Control. API / Auth / MCP / Skills. Commerce. Each category contains one to six discrete probes. Twelve of the probes are scored; the Web Bot Auth probe is informational, and the four commerce probes are informational because they only matter if you sell to agents at all.

I built a browser-side tool that runs the same thirteen probes against any URL you paste into it. The tool lives at /tools/agent-ready-audit/. The results match Cloudflare's category breakdown closely enough that you can use them as a pre-flight check before pointing isitagentready.com at the same URL, and the per-probe Fix prompt the tool emits gives you a paste-ready remediation list ranked by impact.

This post explains what each of the thirteen probes actually tests, how the score is computed, and which findings are worth fixing on a read-only content site versus which ones are honestly absent and should be left alone.

Why this audit exists in May 2026

In April 2026, Cloudflare announced Project Think and OpenAI shipped its updated Agents SDK on the same day. Both reframe the agent as a long-running runtime, not a chat conversation. The runtime fetches your page, parses it, maybe executes some JavaScript, maybe negotiates a Markdown variant, maybe finds an MCP Server Card, maybe finds an Agent Skills index, and decides what to put in front of the model. The model picks citations from whatever the runtime hands it.

That is the shift the agent runtime post covers in depth. The implication for your site is that you are no longer being graded by ChatGPT or Claude. You are being graded by the runtime that fetches you on behalf of an agent that ChatGPT or Claude is operating inside. The runtime cares about machine-readable signals: well-known endpoints, response headers, content negotiation, structured manifests. None of that was in the SEO playbook eighteen months ago. Most of it is still missing on most sites.

Cloudflare's Agent Readiness Score is the cleanest single articulation of that surface I have seen. The Agent Ready Audit at /tools/agent-ready-audit/ is the browser-side implementation that lets you measure it on demand without waiting on a third-party scan queue.

The thirteen probes, in order

The five categories and the probes inside them:

Discoverability. Three probes. robots.txt (RFC 9309), Sitemap discovery, Link response headers (RFC 8288). Together they answer the question "can an agent find your discovery resources without parsing your HTML?"

Content. One probe. Markdown content negotiation. Does your origin honor Accept: text/markdown and return a Markdown variant? Cloudflare ships this as a one-toggle feature; origin servers can implement it directly.

Bot Access Control. Three probes. AI bot rules in robots.txt (per-bot User-agent directives for GPTBot, ClaudeBot, PerplexityBot, Google-Extended, Applebot-Extended, etc.), Content Signals (the contentsignals.org / IETF draft directive declaring AI-usage preferences), Web Bot Auth (informational, only relevant if you operate a friendly bot).

API / Auth / MCP / Skills. Six probes. API Catalog (RFC 9727), OAuth / OIDC discovery (RFC 8414), OAuth Protected Resource (RFC 9728), MCP Server Card (SEP-2127), Agent Skills index (v0.2.0), WebMCP. This is the densest category and the one most sites are weakest on, because nearly all of these specifications landed within the last couple of years.

Commerce. Four probes, all informational. x402 (HTTP 402 Payment Required), MPP (Machine Payment Protocol), UCP (Universal Commerce Protocol), ACP (Agentic Commerce Protocol). These only matter if you sell to agents. A read-only content site honestly skips this entire category and the score reflects that.

Twelve of the probes are scored; the rest are informational. The score is (passed / scored) * 100, rounded to a whole number. Levels: 80+ is Agent-Native, 60-79 is Agent-Friendly, 40-59 is Agent-Aware, 20-39 is Discoverable, and anything lower is Basic Web Presence.
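
For concreteness, the arithmetic looks roughly like this. A minimal TypeScript sketch; the type and function names are mine, not the tool's actual internals:

  // Sketch of the scoring arithmetic described above. Illustrative only.
  type Probe = { name: string; scored: boolean; passed: boolean };

  function agentReadinessScore(probes: Probe[]): { score: number; level: string } {
    const scored = probes.filter((p) => p.scored);             // informational probes excluded
    const passed = scored.filter((p) => p.passed).length;
    const score = Math.round((passed / scored.length) * 100);  // passed / scored * 100
    const level =
      score >= 80 ? "Agent-Native" :
      score >= 60 ? "Agent-Friendly" :
      score >= 40 ? "Agent-Aware" :
      score >= 20 ? "Discoverable" :
      "Basic Web Presence";
    return { score, level };
  }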

What each probe actually checks

The full per-probe documentation lives on the dedicated reference page at /tools/agent-ready-audit-reference/, one Fix pill and one Learn pill per check, just like the cross-audit pattern the rest of the site uses. Quick highlights for the probes that catch most sites by surprise:

Link response headers. Most sites do not send Link: response headers pointing to discovery resources. Adding them is a one-line _headers change on Netlify, a one-line nginx config, a one-line Apache config. The tool shows you exactly what to add.
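
For illustration, the Netlify and nginx versions might look like the lines below. The rel="api-catalog" relation is the one RFC 9727 registers; treat the target path and any other rel values as placeholders for whichever discovery resources you actually publish.

  # Netlify _headers (illustrative)
  /*
    Link: </.well-known/api-catalog>; rel="api-catalog"

  # nginx equivalent (illustrative)
  add_header Link '</.well-known/api-catalog>; rel="api-catalog"' always;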

Markdown content negotiation. On Cloudflare you flip one toggle in Caching > AI Crawl Control. On origin servers you wire content negotiation against Accept: text/markdown. The Cloudflare implementation alone served thousands of Markdown agent fetches per week on early-adopter sites in May 2026, per the public tracking analyses.
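
On an origin you run yourself, the negotiation is a small piece of server code. A minimal Node sketch, assuming you pre-render a .md variant next to each .html file; the file layout is hypothetical and this is not the Cloudflare implementation:

  // Serve a Markdown variant when the client sends Accept: text/markdown.
  import { createServer } from "node:http";
  import { readFile } from "node:fs/promises";

  createServer(async (req, res) => {
    const wantsMarkdown = (req.headers.accept ?? "").includes("text/markdown");
    const path = !req.url || req.url === "/" ? "/index" : req.url;
    try {
      const body = await readFile(`./site${path}${wantsMarkdown ? ".md" : ".html"}`);
      res.writeHead(200, {
        "Content-Type": wantsMarkdown ? "text/markdown; charset=utf-8" : "text/html; charset=utf-8",
        Vary: "Accept", // caches must key on the Accept header
      });
      res.end(body);
    } catch {
      res.writeHead(404).end();
    }
  }).listen(8080);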

Content Signals. A single line in robots.txt: Content-Signal: search=yes, ai-input=yes, ai-train=yes. Adjust the values to your policy. Cloudflare's AI Crawl Control adopted this as the default response in early 2026.
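
Combined with the per-bot rules from the Bot Access Control category, a content site's robots.txt might end up looking like this. The signal values are the ones quoted above, not a recommendation, and the grouping rules are spelled out in the contentsignals.org draft:

  # Illustrative robots.txt: per-bot rules, a content signal, sitemap discovery
  User-agent: GPTBot
  Allow: /

  User-agent: ClaudeBot
  Allow: /

  User-agent: *
  Content-Signal: search=yes, ai-input=yes, ai-train=yes
  Allow: /

  Sitemap: https://example.com/sitemap.xml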

API Catalog (RFC 9727). A static application/linkset+json document at /.well-known/api-catalog. Tells agents where your OpenAPI specs, service docs, and status endpoints live without parsing markup. Almost no content sites publish this, but it is a small file and costs nothing to add.
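
The probe itself is little more than a well-known fetch plus a media-type check. A sketch of the idea, not the tool's code; in the browser the request has to go through the fetch-page proxy because of CORS:

  // Does /.well-known/api-catalog exist and claim the linkset media type?
  async function probeApiCatalog(origin: string): Promise<boolean> {
    const res = await fetch(new URL("/.well-known/api-catalog", origin), {
      headers: { Accept: "application/linkset+json" },
    });
    const type = res.headers.get("content-type") ?? "";
    return res.ok && type.includes("application/linkset+json");
  }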

MCP Server Card (SEP-2127). A JSON manifest at /.well-known/mcp/server-card.json declaring serverInfo, transports, and capabilities. If you operate an MCP server (or are planning to), this is the discovery layer. The spec landed in late 2025 and is still settling, but the path is stable.
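
A skeleton of the manifest using only the three top-level fields named above; everything inside them is a placeholder, and the authoritative schema is SEP-2127 (the MCP Server Audit does the real validation):

  {
    "serverInfo": { "name": "example-mcp", "version": "1.0.0" },
    "transports": [{ "type": "http", "url": "https://example.com/mcp" }],
    "capabilities": { "tools": {} }
  }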

Agent Skills index (v0.2.0). A JSON manifest at /.well-known/agent-skills/index.json listing the SKILL.md files describing what an agent can do with your site. Each entry has name, type, description, url, and optionally a SHA-256 digest. The pattern was promoted by Anthropic and other agent operators through 2026.
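
A sketch of what one entry might look like. The per-entry fields are the ones listed above, but the surrounding layout here is a guess, so check the v0.2.0 spec before publishing:

  {
    "skills": [
      {
        "name": "search-articles",
        "type": "skill",
        "description": "Search the article archive by topic",
        "url": "https://example.com/skills/search-articles/SKILL.md",
        "digest": "sha256:<hex digest of SKILL.md>"
      }
    ]
  }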

WebMCP. A snippet of homepage JavaScript that calls navigator.modelContext.provideContext({tools:[...]}) on page load. Browser-side AI agents pick up the tool registrations at runtime. The spec is a W3C Community Draft as of mid-2026; the tool currently checks for the API call or the imperative-init marker on your homepage HTML.
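
A hedged sketch of what that homepage snippet can look like. The provideContext call is the one the probe looks for; the tool-entry fields here are placeholders, not the draft's verified schema:

  // Guarded so browsers without the API ignore it.
  if (navigator.modelContext?.provideContext) {
    navigator.modelContext.provideContext({
      tools: [
        {
          name: "search_articles",
          description: "Search this site's article archive by keyword",
          // input schema and handler go here per the W3C Community Draft
        },
      ],
    });
  }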

Score interpretation: what is honestly absent

A static read-only content site can hit Agent-Native (80+) without implementing protected-API auth or commerce. The score model treats the four commerce protocols as informational and the Web Bot Auth probe as informational, so you are scored on twelve probes. Of those twelve, OAuth metadata and OAuth Protected Resource will both honestly fail on a site that does not expose protected APIs. That is fine. The tool's per-probe Fix prompt explicitly says "skip unless you have protected APIs" for those two.

What separates an Agent-Friendly score from an Agent-Native score on a content site is usually three things: Link response headers, Markdown content negotiation, and an MCP Server Card. All three are publishable in a single afternoon. The API Catalog is a fourth that takes another hour. After that, the remaining non-passes in the API/Auth/MCP/Skills category are the OAuth probes (correctly absent on a site with no protected APIs), plus the Agent Skills index and WebMCP if you have not published those yet.

How to use the tool in a workflow

Paste a URL. Click run. Wait about twelve seconds while it runs all thirteen probes through the jwatte.com fetch-page proxy (which carries an Origin allow-list and per-IP rate limit so the function does not get hammered). The score and per-category breakdown render at the top, the per-probe findings render below, and the Copy AI fix prompt button at the top right emits a paste-ready Markdown brief that lists every failed probe with its Fix line, ready to drop into Claude or ChatGPT to generate the actual file diffs.

For deeper investigation of any individual finding, the per-finding pills route you to the dedicated specialist tool. The robots.txt finding pills into the AI Posture Audit and the AI Bot Policy Generator. The MCP Server Card finding pills into the MCP Server Audit. The WebMCP finding pills into the WebMCP Readiness Checker. The reference page at /tools/agent-ready-audit-reference/ documents every pill route in one place so you can see the entire surface at a glance before paying attention to a specific URL.

What the audit does not do

It does not make an agent actually fetch your site under its real user-agent. The probes go through a generic UA via the Netlify fetch-page proxy. Some hosts return different bodies to bots versus humans, and the audit cannot detect that. If your score on this tool disagrees with the score on isitagentready.com (which uses different infrastructure and a different UA), the most likely cause is a bot-detection middleware that blocks one of the two requesters.

It does not validate the schema content of the manifests it finds. A 200 OK response on /.well-known/mcp/server-card.json with a malformed serverInfo object will pass the discovery probe and fail downstream when an actual agent tries to use it. The dedicated MCP Server Audit at /tools/mcp-server-audit/ does the deeper schema validation and is the natural follow-up.

It does not measure rate-limit behavior or backpressure. The companion /tools/agent-rate-limit-probe/ handles that surface and is worth running on any site that operates an actual API rather than just publishing content.

For small businesses self-marketing into AI search

If you are running your own site and trying to be findable by ChatGPT, Perplexity, Claude, and Gemini without paying an agency two hundred a month to fiddle with knobs, the Agent Ready Audit is the cheapest single read on whether you are even visible to the runtimes that pick what those models cite. The Discoverability and Bot Access categories cover the same ground a good SEO audit covers; the API/Auth/MCP/Skills category is the new surface most agencies have not yet learned to ask about. The audit takes ten seconds. Acting on the findings takes a long afternoon. The cumulative result is your site moves from "the runtime cannot find anything specific about it" to "the runtime can pull a Markdown variant of your page, find your MCP server, and route an agent to your skills." Search "The $20 Dollar Agency" on Amazon Kindle for the longer playbook on building that whole AEO/GEO stack on twenty dollars a month of AI tooling.

Fact-check notes and sources

Heuristic remote audit. Probes run via the jwatte.com fetch-page proxy with an Origin allow-list and per-IP rate limit. Some hosts return different bodies to bots than to humans; this tool sends a generic UA. Score is point-in-time; some probes (Web Bot Auth, OAuth) are honestly absent on read-only static sites and should not be remediated. This post is informational, not legal or SEO-consulting advice. Mentions of Cloudflare, OpenAI, isitagentready.com, and the named protocols are nominative fair use.
