Agent Ready Audit — Reference

Every sub-routine the Agent Ready Audit runs, in one place: 17 probes (12 scored, 5 informational) across 5 categories — Discoverability, Content, Bot Access Control, API / Auth / MCP / Skills, and Commerce. Each probe documents what it tests, what passes, the suggested fix, and links to the dedicated specialist tool and deep-dive blog post.

Run the audit → · Why this tool exists → · Runtime readiness companion →

How to read this page

The audit at /tools/agent-ready-audit/ mirrors the Cloudflare / isitagentready.com Agent Readiness Score. It runs 17 probes and reports a 0–100 score from the 12 that are scored; Web Bot Auth and the four commerce probes are informational. Levels: 80+ Agent-Native, 60–79 Agent-Friendly, 40–59 Agent-Aware, 20–39 Discoverable, below 20 Basic Web Presence.

Each probe below carries three pills: its scored / informational status, a link to the dedicated specialist tool, and a link to the deep-dive blog post.

Some probes also expose a ☷ Spec pill linking to the underlying RFC, IETF draft, or W3C document, for when you want to read the source.

Table of contents

  1. Discoverability — robots.txt, Sitemap, Link headers (3 probes)
  2. Content — Markdown content negotiation (1 probe)
  3. Bot Access Control — AI bot rules, Content Signals, Web Bot Auth (3 probes)
  4. API / Auth / MCP / Skills — API Catalog, OAuth, OAuth PR, MCP Server Card, Agent Skills, WebMCP (6 probes)
  5. Commerce (informational) — x402, MPP, UCP, ACP (4 probes)

Discoverability

Three probes — can an agent find your discovery resources without parsing your HTML?

robots.txt (RFC 9309) scored
The first file any crawler reads. Must return HTTP 200 with text/plain and contain at least one valid User-agent: directive parseable per RFC 9309.
Path: /robots.txt
Fix: Publish /robots.txt with at minimum User-agent: * and Allow: / (or per-bot rules). Serve as text/plain, return 200.
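A minimal passing file might look like this (yourdomain.com is a placeholder; the Sitemap line also helps the next probe):

```text
User-agent: *
Allow: /

Sitemap: https://yourdomain.com/sitemap.xml
```

Serve it at /robots.txt as text/plain with status 200.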
Sitemap discoverable scored
Either a Sitemap: directive in robots.txt, OR a reachable /sitemap.xml at the apex that parses as <urlset> or <sitemapindex>. Sitemaps power agent crawl planning and let large-corpus operators avoid blanket-fetching every URL on a host.
Paths: Sitemap: line in /robots.txt · or /sitemap.xml
Fix: Add Sitemap: https://yourdomain.com/sitemap.xml to robots.txt and publish a valid XML sitemap.
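A minimal valid urlset, sketched with one placeholder URL (lastmod is optional):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://yourdomain.com/</loc>
    <lastmod>2025-01-01</lastmod>
  </url>
</urlset>
```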
Link response headers (RFC 8288) scored
HTTP Link: response headers point agents to discovery resources without parsing HTML. Recommended rel values: api-catalog, mcp-server-card, agent-skills, llms-txt, sitemap.
Read from: homepage response headers
Fix: Add Link headers via Netlify _headers / netlify.toml (or your origin's response config). Format: </.well-known/api-catalog>; rel="api-catalog"; type="application/linkset+json" — one per resource you want discoverable.
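A sketch of a Netlify _headers file carrying these rel values — the paths are placeholders, and the exact set of Link lines should match whatever discovery resources you actually publish:

```text
/
  Link: </.well-known/api-catalog>; rel="api-catalog"; type="application/linkset+json"
  Link: </.well-known/mcp/server-card.json>; rel="mcp-server-card"
  Link: </sitemap.xml>; rel="sitemap"
```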

Content

One probe — will the host serve a Markdown variant when an agent asks for it?

Markdown content negotiation scored
When an agent sends Accept: text/markdown, the server returns a Markdown variant of the page. The probe sends the Accept header through the proxy, inspects the returned Content-Type, and checks the body for Markdown shape (frontmatter or H1 prefix). Cloudflare's Markdown for Agents and equivalent origin-side negotiation both qualify.
Read from: homepage Content-Type + body shape on Accept: text/markdown
Fix: Enable Cloudflare Markdown for Agents (one toggle in Caching > AI Crawl Control) OR implement an origin endpoint that responds to Accept: text/markdown with a Markdown variant and Vary: Accept.
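A minimal sketch of origin-side negotiation, assuming an Express-style middleware; markdownFor() is a hypothetical lookup for your pre-rendered Markdown variants, and the Accept check is naive (real negotiation should parse q-values per RFC 9110):

```javascript
// Does the Accept header ask for Markdown? Naive substring match.
function wantsMarkdown(acceptHeader) {
  return /\btext\/markdown\b/.test(acceptHeader || "");
}

// Express-style middleware sketch: serve the Markdown variant when asked,
// otherwise fall through to the normal HTML handler.
function markdownNegotiation(req, res, next) {
  if (!wantsMarkdown(req.headers.accept)) return next();
  const md = markdownFor(req.path); // hypothetical: your variant lookup
  if (!md) return next();
  res.set("Content-Type", "text/markdown; charset=utf-8");
  res.set("Vary", "Accept"); // keep caches from mixing HTML and Markdown
  res.send(md);
}
```

The Vary: Accept header is the part that's easy to forget: without it, a shared cache can hand the Markdown body to a browser, or vice versa.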

Bot Access Control

Three probes — does your site state an explicit, machine-readable policy for AI bots?

AI bot rules in robots.txt scored
Explicit per-bot User-agent: directives for known AI crawlers: GPTBot, OAI-SearchBot, ChatGPT-User, ClaudeBot, anthropic-ai, PerplexityBot, Google-Extended, Applebot-Extended, CCBot, Bytespider, Meta-ExternalAgent, FacebookBot, Cohere-AI, Amazonbot, YouBot, Diffbot, Mistralai-User, etc. An Allow rule counts — silence does not.
Read from: /robots.txt (parsed for known AI-bot UAs)
Fix: Add explicit User-agent blocks for each AI bot you want to allow or disallow. Use the AI Bot Policy Generator to produce a paired robots.txt + ai.txt from your policy.
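A sketch of explicit per-bot rules — the allow/disallow split below is illustrative, not a recommendation; set each bot to match your own policy:

```text
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: OAI-SearchBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: *
Allow: /
```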
Content Signals in robots.txt (contentsignals.org) scored
A structured directive in robots.txt declaring AI usage preferences: Content-Signal: search=yes, ai-input=yes, ai-train=yes. Adopted by Cloudflare AI Crawl Control as the default response. Without it, agents fall back to inferring intent from Allow/Disallow alone.
Read from: any Content-Signal: line in /robots.txt
Fix: Add Content-Signal: search=yes, ai-input=yes, ai-train=yes to your User-agent: * block (adjust values to your policy — yes, no, or omit per signal).
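In context, the directive sits inside the User-agent: * block; the values below are illustrative (this example permits search and ai-input but opts out of training):

```text
User-agent: *
Content-Signal: search=yes, ai-input=yes, ai-train=no
Allow: /
```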
Web Bot Auth signature directory informational
An IETF draft for cryptographically authenticating bots via HTTP Message Signatures. Public keys live at /.well-known/http-message-signatures-directory. Mainly relevant if you operate a friendly bot; absence on a content site is fine and does not affect score.
Path: /.well-known/http-message-signatures-directory
Fix: Required only if you operate a bot. Read-only content sites can leave this absent without penalty.

API / Auth / MCP / Skills

Six probes — the densest category and the one most sites are weakest on.

API Catalog (RFC 9727) scored
A static application/linkset+json document with a linkset array. Lets agents discover OpenAPI specs, service docs, status endpoints, and other API artifacts without parsing markup. Each linkset entry needs anchor plus link relations like service-desc, service-doc, status.
Path: /.well-known/api-catalog
Fix: Publish /.well-known/api-catalog as application/linkset+json with a linkset array. RFC 9727 Appendix A has worked examples.
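A minimal sketch of the linkset shape — the anchor, hrefs, and media types are placeholders; RFC 9727 Appendix A remains the authoritative reference:

```json
{
  "linkset": [
    {
      "anchor": "https://yourdomain.com/api",
      "service-desc": [
        { "href": "https://yourdomain.com/api/openapi.json", "type": "application/openapi+json" }
      ],
      "service-doc": [
        { "href": "https://yourdomain.com/docs/api", "type": "text/html" }
      ]
    }
  ]
}
```

Serve it at /.well-known/api-catalog with Content-Type: application/linkset+json.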
OAuth / OIDC discovery (RFC 8414) scored
Either /.well-known/oauth-authorization-server (RFC 8414) or /.well-known/openid-configuration (OpenID Connect Discovery 1.0). Required only if your APIs are protected; leaving both absent on a read-only public site is honest, and the audit's Fix copy says so.
Paths: /.well-known/oauth-authorization-server · /.well-known/openid-configuration
Fix: If you have protected APIs, publish issuer + endpoints per RFC 8414 (or run an OIDC provider). Read-only public sites should leave both absent.
OAuth Protected Resource (RFC 9728) scored
/.well-known/oauth-protected-resource. Tells agents which authorization server issues tokens for this resource. Static read-only sites can skip honestly.
Path: /.well-known/oauth-protected-resource
Fix: Required only for protected APIs. Read-only static sites can leave this absent.
MCP Server Card (SEP-2127) scored
/.well-known/mcp/server-card.json. Declares serverInfo (name, version), one or more transports (http-stream, websocket, webmcp), and capabilities (tools, resources, prompts). The audit also tries legacy paths /.well-known/mcp.json and /.well-known/mcp/server-cards.json.
Paths: /.well-known/mcp/server-card.json (preferred) · /.well-known/mcp.json · /.well-known/mcp/server-cards.json
Fix: Publish /.well-known/mcp/server-card.json with serverInfo, transports, and capabilities. The MCP Server Recommender suggests transports and capability blocks based on what your site actually exposes.
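A hedged sketch of the card's shape — the top-level keys follow the probe description above, but the exact field layout inside transports and capabilities should be checked against SEP-2127:

```json
{
  "serverInfo": { "name": "yourdomain-mcp", "version": "1.0.0" },
  "transports": [
    { "type": "http-stream", "url": "https://yourdomain.com/mcp" }
  ],
  "capabilities": { "tools": {}, "resources": {} }
}
```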
Agent Skills index (v0.2.0) scored
/.well-known/agent-skills/index.json. Lists SKILL.md files describing what an agent can do with your site. Each entry has name, type, description, url, and optionally a SHA-256 digest. The audit also accepts the legacy /.well-known/skills/index.json path.
Paths: /.well-known/agent-skills/index.json (preferred) · /.well-known/skills/index.json (legacy)
Fix: Publish /.well-known/agent-skills/index.json per the v0.2.0 spec. Each skill entry needs name, type, description, url, optional sha256.
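A sketch of one index entry using the fields listed above — the top-level key, the type value, and the skill itself are illustrative assumptions; check the v0.2.0 spec for the exact envelope (sha256 is optional and omitted here):

```json
{
  "skills": [
    {
      "name": "summarize-docs",
      "type": "skill",
      "description": "How an agent should navigate and summarize this site's docs section.",
      "url": "/.well-known/agent-skills/summarize-docs/SKILL.md"
    }
  ]
}
```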
WebMCP (W3C Community Draft) scored
JS code that calls navigator.modelContext.provideContext() on page load to register tools with browser-side AI agents. The probe inspects the homepage HTML for the API call (navigator.modelContext or provideContext()) or the imperative-init marker (data-webmcp-imperative).
Read from: homepage HTML (script content + attributes)
Fix: Add a script that calls navigator.modelContext.provideContext({tools:[...]}) on page load. Each tool needs name, description, inputSchema, and an execute callback. The WebMCP Readiness Checker generates the skeleton.
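A minimal registration sketch, assuming a browser that exposes the draft navigator.modelContext API; the search_site tool and its /api/search endpoint are hypothetical stand-ins for whatever your site actually exposes:

```javascript
// One tool: name, description, inputSchema, and an execute callback,
// as the probe's Fix copy describes.
const tools = [
  {
    name: "search_site", // hypothetical tool name
    description: "Search this site's pages by keyword.",
    inputSchema: {
      type: "object",
      properties: { query: { type: "string" } },
      required: ["query"],
    },
    async execute({ query }) {
      // Hypothetical endpoint — replace with your real search API.
      const res = await fetch(`/api/search?q=${encodeURIComponent(query)}`);
      return res.json();
    },
  },
];

// Register only where the draft API actually exists.
if (globalThis.navigator?.modelContext?.provideContext) {
  navigator.modelContext.provideContext({ tools });
}
```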

Commerce

Four probes, all informational — only relevant if you sell to agents

Score note: all four commerce probes are informational. They appear in the per-category breakdown but do not affect the 0–100 score. A read-only content site honestly skips this entire category.
x402 (HTTP 402 Payment Required) informational
Coinbase-led protocol for agent-native HTTP payments. Protected routes return HTTP 402 with payment requirements that compatible agents fulfill automatically. The probe checks /api and /api/v1 for a 402 response.
Read from: /api · /api/v1
Fix: Only relevant if you charge agents per-request. Implement payment-required gating per the x402 spec on routes you want monetized.
MPP (Machine Payment Protocol) informational
An /openapi.json document with x-payment-info extensions on payable operations. The probe fetches /openapi.json, parses it, and checks for the x-payment-info extension anywhere in the document.
Path: /openapi.json (with x-payment-info extension)
Fix: Skip unless you sell agent-callable services. If you do, annotate payable operations in your OpenAPI document with x-payment-info.
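A sketch of where the extension sits in an OpenAPI document — the x-payment-info name comes from the probe above, but the fields inside it are illustrative placeholders, not the MPP schema:

```yaml
paths:
  /api/v1/summarize:
    post:
      summary: Summarize a document (payable)
      # Illustrative placeholder fields — consult the MPP spec
      # for the actual extension schema.
      x-payment-info:
        price: "0.001"
        currency: "USD"
```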
UCP (Universal Commerce Protocol) informational
/.well-known/ucp profile. Co-developed by Google, Shopify, and Etsy as a capability profile for agent-driven commerce.
Path: /.well-known/ucp
Fix: Skip unless you implement Universal Commerce. Publish a UCP capability profile per the spec if you do.
ACP (Agentic Commerce Protocol) informational
/.well-known/acp.json. Stripe + OpenAI specification for agent-driven checkout. The probe fetches the JSON and checks for a protocol object with a name field.
Path: /.well-known/acp.json
Fix: Skip unless you sell via Stripe / OpenAI ACP. Publish the JSON document per the ACP spec if you do.

Score model in plain English

Twelve probes count toward the score. Web Bot Auth and the four commerce probes are informational: they appear in the per-category breakdown but don't move the number. The total is passed_scored / 12 × 100, rounded to a whole number. Per-category percentages are passed / total_in_category.
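The formula above is small enough to sketch directly:

```javascript
// Score model: passed scored probes over the 12 that count,
// as a rounded whole-number percentage.
function auditScore(passedScored, totalScored = 12) {
  return Math.round((passedScored / totalScored) * 100);
}

auditScore(9); // passing 9 of 12 scored probes → 75, i.e. Agent-Friendly
```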

Levels: 80+ Agent-Native, 60–79 Agent-Friendly, 40–59 Agent-Aware, 20–39 Discoverable, below 20 Basic Web Presence. The model closely matches the public Cloudflare / isitagentready.com framing; minor differences come from probe-by-probe pass criteria, which the per-probe sections above document.

What the audit cannot measure from the outside

Disclaimer

Probes run through the jwatte.com fetch-page proxy, with an Origin allow-list and a per-IP rate limit. Some hosts return different bodies to bots than to humans; this tool sends a generic UA. The score is a point-in-time snapshot; some probes (Web Bot Auth, OAuth) are honestly absent on read-only static sites and should not be remediated. Mentions of Cloudflare, isitagentready.com, OpenAI, Stripe, Coinbase, Google, Shopify, Etsy, and the named protocols are nominative fair use; no affiliation is implied.