Every sub-routine the Agent Ready Audit runs, in one place: 17 probes across 5 categories (Discoverability, Content, Bot Access Control, API / Auth / MCP / Skills, and Commerce). Each probe documents what it tests, what passes, and the suggested fix, and links to the dedicated specialist tool plus a deep-dive blog post.
Run the audit → · Why this tool exists → · Runtime readiness companion →
The audit at /tools/agent-ready-audit/ mirrors the Cloudflare / isitagentready.com Agent Readiness Score. It runs 17 probes and reports a 0–100 score from the 12 that are scored; Web Bot Auth and the four commerce probes are informational. Levels: 80+ Agent-Native, 60-79 Agent-Friendly, 40-59 Agent-Aware, 20-39 Discoverable, below 20 Basic Web Presence.
Each probe below has three pills. Some probes also expose a ☷ Spec pill linking to the underlying RFC, IETF draft, or W3C document for cases where you want to read the source.
Discoverability

robots.txt
What it tests: /robots.txt serves as text/plain and contains at least one valid User-agent: directive parseable per RFC 9309.
Checked: /robots.txt
Fix: Publish /robots.txt with at minimum User-agent: * and Allow: / (or per-bot rules). Serve as text/plain, return 200. (A combined sketch covering the robots-family probes appears after the OAuth probes below.)

Sitemap
What it tests: A Sitemap: directive in robots.txt, OR a reachable /sitemap.xml at the apex that parses as <urlset> or <sitemapindex>. Sitemaps power agent crawl planning and let large-corpus operators avoid blanket-fetching every URL on a host.
Checked: Sitemap: line in /robots.txt · or /sitemap.xml
Fix: Add Sitemap: https://yourdomain.com/sitemap.xml to robots.txt and publish a valid XML sitemap.

Link headers
What it tests: Link: response headers point agents to discovery resources without parsing HTML. Recommended rel values: api-catalog, mcp-server-card, agent-skills, llms-txt, sitemap.
Checked: response headers (set via _headers / netlify.toml or your origin's response config)
Fix: Emit </.well-known/api-catalog>; rel="api-catalog"; type="application/linkset+json"; one Link: header per resource you want discoverable.

Content

Markdown content negotiation
What it tests: When requested with Accept: text/markdown, the server returns a Markdown variant of the page. The probe sends the Accept header through the proxy, inspects the returned Content-Type, and checks the body for Markdown shape (frontmatter or an H1 prefix). Cloudflare's Markdown for Agents and equivalent origin-side negotiation both qualify.
Checked: Content-Type + body shape on Accept: text/markdown
Fix: Answer Accept: text/markdown with a Markdown variant and Vary: Accept.

Bot Access Control

AI crawler directives
What it tests: Explicit User-agent: directives for known AI crawlers: GPTBot, OAI-SearchBot, ChatGPT-User, ClaudeBot, anthropic-ai, PerplexityBot, Google-Extended, Applebot-Extended, CCBot, Bytespider, Meta-ExternalAgent, FacebookBot, Cohere-AI, Amazonbot, YouBot, Diffbot, Mistralai-User, etc. An Allow rule counts; silence does not.
Checked: /robots.txt (parsed for known AI-bot UAs)
Fix: Add User-agent blocks for each AI bot you want to allow or disallow. Use the AI Bot Policy Generator to produce a paired robots.txt + ai.txt from your policy.

Content-Signal
What it tests: A Content-Signal: line in robots.txt, e.g. Content-Signal: search=yes, ai-input=yes, ai-train=yes. Adopted by Cloudflare AI Crawl Control as the default response. Without it, agents fall back to inferring intent from Allow/Disallow alone.
Checked: Content-Signal: line in /robots.txt
Fix: Add Content-Signal: search=yes, ai-input=yes, ai-train=yes to your User-agent: * block (adjust values to your policy: yes, no, or omit per signal).

Web Bot Auth (informational)
What it tests: A directory at /.well-known/http-message-signatures-directory. Mainly relevant if you operate a friendly bot; absence on a content site is fine and does not affect the score.
Checked: /.well-known/http-message-signatures-directory

API / Auth / MCP / Skills

API catalog (RFC 9727)
What it tests: An application/linkset+json document with a linkset array. Lets agents discover OpenAPI specs, service docs, status endpoints, and other API artifacts without parsing markup. Each linkset entry needs anchor plus link relations like service-desc, service-doc, status.
Checked: /.well-known/api-catalog
Fix: Publish /.well-known/api-catalog as application/linkset+json with a linkset array. RFC 9727 Appendix A has worked examples.

OAuth authorization server metadata
What it tests: /.well-known/oauth-authorization-server (RFC 8414) or /.well-known/openid-configuration (OpenID Connect Discovery 1.0). Required only if your APIs are protected; intentional absence on a read-only public site is honest, and the audit's Fix copy says so.
Checked: /.well-known/oauth-authorization-server · /.well-known/openid-configuration

OAuth protected resource metadata
What it tests: /.well-known/oauth-protected-resource. Tells agents which authorization server issues tokens for this resource. Static read-only sites can skip this honestly.
Checked: /.well-known/oauth-protected-resource
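Pulling the robots-family probes together (robots.txt, Sitemap, AI crawler directives, Content-Signal): one /robots.txt can satisfy all four. A minimal sketch; the domain, bot list, and signal values are placeholders for your own policy:

```
# Hypothetical robots.txt; adjust bots and signals to your policy.
User-agent: *
Allow: /
Content-Signal: search=yes, ai-input=yes, ai-train=yes

# Explicit rules for AI crawlers (an Allow counts; silence does not).
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

Sitemap: https://yourdomain.com/sitemap.xml
```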
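For the Markdown content negotiation probe, origin-side negotiation can be as simple as branching on the Accept header. A minimal sketch using the standard Fetch API; markdownFor and serveHtml are hypothetical helpers standing in for your own content store:

```ts
// Sketch of an origin handler that serves a Markdown variant when asked.
async function handle(request: Request): Promise<Response> {
  const accept = request.headers.get("accept") ?? "";
  if (accept.includes("text/markdown")) {
    const md = await markdownFor(new URL(request.url).pathname);
    return new Response(md, {
      headers: {
        "Content-Type": "text/markdown; charset=utf-8",
        // Vary tells caches the body depends on the Accept header.
        "Vary": "Accept",
      },
    });
  }
  return serveHtml(request); // fall through to the normal HTML response
}

// Assumed helpers; a real site backs these with its own content pipeline.
declare function markdownFor(path: string): Promise<string>;
declare function serveHtml(request: Request): Promise<Response>;
```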
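For the Link headers probe, a Netlify-style _headers file is one place to emit the discovery links. A sketch, assuming the referenced resources exist and that your host merges repeated Link lines:

```
# Hypothetical _headers file; one Link per discoverable resource.
/*
  Link: </.well-known/api-catalog>; rel="api-catalog"; type="application/linkset+json"
  Link: </.well-known/mcp/server-card.json>; rel="mcp-server-card"
  Link: </.well-known/agent-skills/index.json>; rel="agent-skills"
  Link: </llms.txt>; rel="llms-txt"
  Link: </sitemap.xml>; rel="sitemap"
```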
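And for the API catalog probe, a minimal /.well-known/api-catalog shaped as a linkset document. URLs and media types here are indicative only; RFC 9727 Appendix A has the authoritative examples:

```json
{
  "linkset": [
    {
      "anchor": "https://yourdomain.com/api",
      "service-desc": [
        { "href": "https://yourdomain.com/openapi.json", "type": "application/openapi+json" }
      ],
      "service-doc": [
        { "href": "https://yourdomain.com/docs/api", "type": "text/html" }
      ]
    }
  ]
}
```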
MCP server card
What it tests: /.well-known/mcp/server-card.json. Declares serverInfo (name, version), one or more transports (http-stream, websocket, webmcp), and capabilities (tools, resources, prompts). The audit also tries the legacy paths /.well-known/mcp.json and /.well-known/mcp/server-cards.json.
Checked: /.well-known/mcp/server-card.json (preferred) · /.well-known/mcp.json · /.well-known/mcp/server-cards.json
Fix: Publish /.well-known/mcp/server-card.json with serverInfo, transports, and capabilities. The MCP Server Recommender suggests transports and capability blocks based on what your site actually exposes. (A skeletal card appears after the commerce probes below.)

Agent Skills
What it tests: /.well-known/agent-skills/index.json. Lists SKILL.md files describing what an agent can do with your site. Each entry has name, type, description, url, and optionally a SHA-256 digest. The audit also accepts the legacy /.well-known/skills/index.json path.
Checked: /.well-known/agent-skills/index.json (preferred) · /.well-known/skills/index.json (legacy)
Fix: Publish /.well-known/agent-skills/index.json per the v0.2.0 spec. Each skill entry needs name, type, description, url, and optional sha256.

WebMCP
What it tests: The homepage calls navigator.modelContext.provideContext() on page load to register tools with browser-side AI agents. The probe inspects the homepage HTML for the API call (navigator.modelContext or provideContext()) or the imperative-init marker (data-webmcp-imperative).
Checked: homepage HTML
Fix: Call navigator.modelContext.provideContext({tools:[...]}) on page load. Each tool needs name, description, inputSchema, and an execute callback. The WebMCP Readiness Checker generates the skeleton. (A registration sketch appears after the commerce probes below.)

Commerce (informational)

x402 / HTTP 402
What it tests: Whether /api or /api/v1 answers with a 402 response.
Checked: /api · /api/v1

OpenAPI payment extensions
What it tests: An /openapi.json document with x-payment-info extensions on payable operations. The probe fetches /openapi.json, parses it, and checks for the x-payment-info extension anywhere in the document.
Checked: /openapi.json
Fix: Annotate payable operations with x-payment-info.

UCP
What it tests: A /.well-known/ucp profile. Co-developed by Google, Shopify, and Etsy as a capability profile for agent-driven commerce.
Checked: /.well-known/ucp

ACP
What it tests: /.well-known/acp.json, a Stripe + OpenAI specification for agent-driven checkout. The probe fetches the JSON and checks for a protocol object with a name field.
Checked: /.well-known/acp.json
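Back to the MCP server card probe: a skeletal /.well-known/mcp/server-card.json with the three blocks the probe looks for. The names, endpoint, and exact shape of the transport objects are assumptions here; run the MCP Server Audit for real schema validation:

```json
{
  "serverInfo": { "name": "yourdomain-mcp", "version": "1.0.0" },
  "transports": [
    { "type": "http-stream", "url": "https://yourdomain.com/mcp" }
  ],
  "capabilities": {
    "tools": {},
    "resources": {}
  }
}
```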
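Likewise for the Agent Skills probe, a minimal index with the fields each entry needs. The top-level key is an assumption and the digest shown is the SHA-256 of the empty string, standing in as a placeholder; the v0.2.0 spec is authoritative:

```json
{
  "skills": [
    {
      "name": "search-articles",
      "type": "skill.md",
      "description": "Search the article archive by keyword.",
      "url": "https://yourdomain.com/skills/search-articles/SKILL.md",
      "sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
    }
  ]
}
```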
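And for the WebMCP probe, a page-load registration along the lines the Fix describes. The tool and its endpoint are hypothetical, and navigator.modelContext is cast loosely because it is not yet in standard DOM typings:

```ts
// Register one browser-side tool on page load (sketch).
window.addEventListener("load", () => {
  const mc = (navigator as any).modelContext; // not yet in lib.dom typings
  mc?.provideContext({
    tools: [
      {
        name: "lookup_order",
        description: "Look up an order's status by its ID.",
        inputSchema: {
          type: "object",
          properties: { orderId: { type: "string" } },
          required: ["orderId"],
        },
        // The agent calls execute with arguments matching inputSchema.
        async execute({ orderId }: { orderId: string }) {
          const res = await fetch(`/api/orders/${encodeURIComponent(orderId)}`);
          return await res.json();
        },
      },
    ],
  });
});
```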
Twelve probes count toward the score; Web Bot Auth and the four commerce probes are informational, so they appear in the per-category breakdown but don't move the number. The total is passed_scored / 12 × 100, rounded to a whole number. Per-category percentages are passed / total_in_category.

Levels: 80+ Agent-Native, 60-79 Agent-Friendly, 40-59 Agent-Aware, 20-39 Discoverable, below 20 Basic Web Presence. The model closely matches the public Cloudflare / isitagentready.com framing; minor differences come from probe-by-probe pass criteria, which the per-probe sections above document.
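A sketch of that arithmetic with the thresholds above; the constant and function names are illustrative, not the audit's actual code:

```ts
// 12 scored probes; Web Bot Auth and the 4 commerce probes are informational.
const SCORED_PROBE_COUNT = 12;

// passed_scored / 12 × 100, rounded to a whole number.
function score(passedScored: number): number {
  return Math.round((passedScored / SCORED_PROBE_COUNT) * 100);
}

function level(score: number): string {
  if (score >= 80) return "Agent-Native";
  if (score >= 60) return "Agent-Friendly";
  if (score >= 40) return "Agent-Aware";
  if (score >= 20) return "Discoverable";
  return "Basic Web Presence";
}

// Example: 9 of 12 scored probes pass -> score 75, "Agent-Friendly".
console.log(score(9), level(score(9)));
```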
A /.well-known/mcp/server-card.json with a malformed body passes the discovery probe and fails downstream when an actual agent tries to use it; run the dedicated MCP Server Audit for schema validation. Probes run via the jwatte.com fetch-page proxy with an Origin allow-list and a per-IP rate limit. Some hosts return different bodies to bots vs. humans; this tool sends a generic UA. The score is point-in-time, and some probes (Web Bot Auth, OAuth) are honestly absent on read-only static sites and should not be remediated. Mentions of Cloudflare, isitagentready.com, OpenAI, Stripe, Coinbase, Google, Shopify, Etsy, and the named protocols are nominative fair use; no affiliation implied.