# The Cloudflare Agent Readiness Score and What It Actually Checks

isitagentready.com tests for 13+ signals across discoverability, content, bot access, and capabilities. Here's what each one does, what's worth shipping, and what's optional for most sites.

Author: J.A. Watte
Published: May 10, 2026
Source: https://jwatte.com/blog/blog-cloudflare-agent-readiness-score/

---

**TL;DR.** Cloudflare's Agent Readiness Score, published at isitagentready.com, tests roughly thirteen signals across four dimensions. The signals fall into three tiers: the basics every site should have (robots.txt, sitemap, llms.txt), the directional bets worth making now if you have any agent or developer surface (Content Signals, AGENTS.md, MCP server card), and the experimental layers that only matter if you ship an agent product yourself (Agent Skills, A2A agent card, Web Bot Auth). The score is useful as a thermometer. It's biased toward standards Cloudflare implements natively, so treat it as one input alongside Lighthouse and a classic accessibility audit.

## The four dimensions

Cloudflare's scorer groups its checks into four buckets.

**Discoverability.** Can an agent find what's on your site? Three signals: `robots.txt`, `sitemap.xml`, and Link response headers (RFC 8288). The first two are pre-AI baseline hygiene. Link headers are an underused way to advertise alternate representations of a page (a markdown version, a JSON-LD blob) via HTTP rather than HTML.
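
For instance, a page could advertise its markdown twin with a header like this (the path and type are a typical case, not a requirement of the score):

```http
Link: </blog/some-post.md>; rel="alternate"; type="text/markdown"
```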

**Content.** Can an agent parse what's on your site without scraping the chrome? One signal here: Markdown content negotiation. Send `Accept: text/markdown` and the server returns a clean `.md` version of the same content. The pattern most sites use is a sibling `.md` URL or a content-negotiation rule at the edge.
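
On the wire the pattern looks like this; the host, path, and exact response headers are placeholders for whatever your stack emits:

```http
GET /some-page HTTP/1.1
Host: example.com
Accept: text/markdown

HTTP/1.1 200 OK
Content-Type: text/markdown; charset=utf-8
Vary: Accept
```

The `Vary: Accept` header matters: without it, a shared cache can serve the markdown body to a browser that asked for HTML.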

**Bot Access Control.** Who can crawl, train, and use your content, and how do you express that? Three signals: AI bot rules in robots.txt (per-bot allow/disallow), Content Signals (the three-permission granularity for AI behavior), and Web Bot Auth (signed-request verification of well-behaved bots).

**Capabilities.** What can an agent *do* on your site? Six signals here, all centered on `/.well-known/` discovery paths: Agent Skills, API Catalog (RFC 9727), OAuth Server Discovery (RFC 8414), OAuth Protected Resource (RFC 9728), MCP Server Card, and the A2A Agent Card.

A separate Commerce section tests for x402, the Universal Commerce Protocol, and the Agentic Commerce Protocol, but doesn't roll into the displayed score.

## Tier 1: Ship now if you haven't

These are the signals every site should have, agent-readiness aside. The score will demand them; so will every other audit you run.

- **`/robots.txt`** with explicit rules for AI bots, not just the legacy `User-agent: *` block. At minimum cover GPTBot, ClaudeBot, PerplexityBot, Google-Extended, Applebot-Extended, CCBot, and Bytespider. The [AI Bot Policy Generator](/tools/ai-bot-policy-gen/) emits the standard set; a minimal sketch follows this list.
- **`/sitemap.xml`**. Standard XML sitemap. Index sitemap if you have more than 50,000 URLs.
- **`/llms.txt`** at the site root. A markdown reading list aimed at LLMs. The [llms.txt Generator](/tools/llms-txt-generator/) drafts one from your sitemap.
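
Here's a minimal sketch of the per-bot robots.txt block, assuming a policy that allows classic crawling but opts out of the AI crawlers listed above; flip `Disallow` to `Allow` wherever your policy differs, and treat the sitemap URL as a placeholder:

```txt
User-agent: *
Allow: /

# AI crawlers: opted out in this example.
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: PerplexityBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: Applebot-Extended
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Bytespider
Disallow: /

Sitemap: https://example.com/sitemap.xml
```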

If you ship those three correctly, the score's Discoverability dimension is in good shape and roughly a third of the total is locked in.

## Tier 2: Directional bets worth making

The middle tier is where most decisions sit. Each of these has a clear use case but isn't yet universal. Ship them if the use case fits.

**Markdown content negotiation.** Two patterns work. The HTML pattern is a `<link rel="alternate" type="text/markdown" href="/some-page.md">` element in the head of each page. The HTTP pattern is server-side content negotiation against `Accept: text/markdown`. The HTML pattern is easier to deploy; the HTTP pattern is closer to the web's original content-negotiation design. Either is acceptable to the score. The [Markdown for Agents Generator](/tools/markdown-for-agents-generator/) produces the markdown bodies.
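
A minimal sketch of the HTTP pattern as a Cloudflare Worker, assuming (as most sites do, per above) that the markdown bodies live at sibling `.md` URLs; this is one way to wire it, not the only one:

```ts
export default {
  async fetch(request: Request): Promise<Response> {
    const accept = request.headers.get("Accept") ?? "";
    const url = new URL(request.url);

    // Negotiate only when the client prefers markdown and the path
    // isn't already the markdown sibling.
    if (accept.includes("text/markdown") && !url.pathname.endsWith(".md")) {
      const mdPath = url.pathname.replace(/\/$/, "") + ".md";
      const md = await fetch(new URL(mdPath, url.origin).toString());
      if (md.ok) {
        return new Response(md.body, {
          headers: {
            "Content-Type": "text/markdown; charset=utf-8",
            // Caches must key on the Accept header or they'll serve
            // markdown to browsers that asked for HTML.
            "Vary": "Accept",
          },
        });
      }
    }

    // Fall through to the normal HTML response.
    return fetch(request);
  },
};
```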

**Content Signals.** Cloudflare's proposal for splitting AI permissions three ways. The mechanism is a `Content-Signal:` directive in robots.txt (and/or a meta-robots equivalent) that grants or denies `ai-train`, `ai-input`, and `search` separately. The combinations matter. Many publishers want `search=yes` (so AI search engines can include them) but `ai-train=no` (so models don't ingest them for training) and `ai-input=yes` (so a user pasting their URL into a chat session gets the content). Pre-Content-Signals, that combination was impossible to express in a standard way. The [ai.txt Generator](/tools/ai-txt-gen/) emits the directive in the canonical form.
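
In robots.txt, the publisher combination described above comes out roughly like this (syntax per Cloudflare's proposal at time of writing; check the current spec text before shipping):

```txt
User-agent: *
Content-Signal: search=yes, ai-input=yes, ai-train=no
Allow: /
```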

**AGENTS.md at root.** Useful if your site has any developer surface (docs, API, repo). Read the [dedicated post](/blog/blog-agents-md-root-spec/) for the full breakdown.

**MCP Server Card.** A `/.well-known/mcp/server-card.json` describing your MCP server's tools and authentication. Only relevant if you ship an MCP server. If you do, publishing the server card is the discovery path agents use to find and authenticate against it.

## Tier 3: Experimental layers

The third tier is the part of the score where most sites legitimately have nothing. Ship these when you have the specific use case; don't ship them as ceremony.

**`/.well-known/agent-skills/index.json`.** Declares discrete callable skills your agent or site exposes. Skills differ from MCP tools in abstraction level: a skill is a complete task with defined inputs and outputs, while MCP tools are lower-level primitives. Worth publishing if you ship Cursor extensions, custom Claude Skills, or any other agent capability bundle.

**`/.well-known/api-catalog` (RFC 9727).** A discovery file pointing at every OpenAPI / GraphQL / API definition on your site. The format is a JSON index. If you publish public APIs, this gives any agent a single canonical entry point.
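
RFC 9727 builds on the linkset format (RFC 9264), so a one-API catalog looks roughly like this; every URL below is hypothetical, and the `type` value depends on how you serve your OpenAPI document:

```json
{
  "linkset": [
    {
      "anchor": "https://example.com/apis/orders",
      "service-desc": [
        {
          "href": "https://example.com/apis/orders/openapi.yaml",
          "type": "application/yaml"
        }
      ]
    }
  ]
}
```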

**`/.well-known/oauth-authorization-server` (RFC 8414).** Describes your OAuth authorization server's endpoints and capabilities. Required if you offer authenticated agent flows. The companion `/.well-known/oauth-protected-resource` (RFC 9728) tells agents which authorization server to use for a given resource. Both are mainstream OAuth; the agent-readiness twist is just publishing them at the well-known paths.
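
A trimmed RFC 8414 metadata document looks like this; the endpoints are placeholders, and real deployments publish more fields:

```json
{
  "issuer": "https://auth.example.com",
  "authorization_endpoint": "https://auth.example.com/authorize",
  "token_endpoint": "https://auth.example.com/token",
  "registration_endpoint": "https://auth.example.com/register",
  "response_types_supported": ["code"],
  "grant_types_supported": ["authorization_code", "refresh_token"],
  "code_challenge_methods_supported": ["S256"]
}
```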

**`/.well-known/http-message-signatures-directory` (Web Bot Auth).** A directory of public keys used by well-behaved bots to sign their requests. Lets your site verify bot identity instead of blanket-blocking by user-agent. If you operate a bot, publishing your public key here is the signal-of-good-citizenship play. If you run a site, accepting signatures means you can let trusted bots through tighter access controls.
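
As of the current draft, the directory is a JWKS-style document listing the operator's signing keys; a single-Ed25519-key sketch, with the `x` value an obvious placeholder (verify the exact format against the draft before publishing):

```json
{
  "keys": [
    {
      "kty": "OKP",
      "crv": "Ed25519",
      "x": "REPLACE_WITH_BASE64URL_PUBLIC_KEY"
    }
  ]
}
```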

**`/.well-known/agent-card.json`.** Google's Agent-to-Agent protocol discovery file. Declares an agent's capabilities to other agents in the A2A ecosystem. Skip unless you ship an agent product.
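
A minimal A2A card declares roughly the following; the field names track the A2A spec at time of writing, and every value here is hypothetical:

```json
{
  "name": "Example Support Agent",
  "description": "Answers order and shipping questions.",
  "url": "https://agent.example.com/a2a",
  "version": "1.0.0",
  "capabilities": { "streaming": true },
  "skills": [
    {
      "id": "order-status",
      "name": "Order status lookup",
      "description": "Returns the status of an order by order number."
    }
  ]
}
```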

## How the score is calibrated

Cloudflare's scorer favors signals Cloudflare can deliver natively from its own products: Content Signals (introduced via Cloudflare blog posts), Web Bot Auth (a Cloudflare AI Audit feature), MCP Server Card (Cloudflare Workers + Pages templates). That's not a criticism. Every vendor's audit tool reflects its commercial worldview. The right reading: the score is a useful thermometer for "is this site agent-friendly" and a less useful answer to "is this site ranking well for AI search."

For the search-ranking question, you want a tool that scores against what AI retrievers actually use, which today is schema density, citation surface, and the same E-E-A-T signals that drive classical search. The [Mega SEO Analyzer](/tools/mega-seo-analyzer/) folds the Cloudflare-style agent-readiness signals into a wider scorecard that includes ranking authority, hygiene, performance, security, trust, and the new Helpful Content (HCU) dimension.

## A reasonable rollout order

If you're starting from zero, this is the order that has the highest payoff for the time invested:

1. Fix robots.txt to cover AI bots explicitly. (15 minutes)
2. Publish `sitemap.xml` if you don't already. (Eleventy, Astro, Next, Hugo all generate one with a plugin or built-in.)
3. Publish `llms.txt` at root. ([Generator](/tools/llms-txt-generator/), 10 minutes.)
4. Add Content Signals to robots.txt. (5 minutes, [ai.txt Generator](/tools/ai-txt-gen/).)
5. Add Markdown for Agents to your main pages. ([Generator](/tools/markdown-for-agents-generator/), per-page.)
6. If you have a developer surface, publish `AGENTS.md` at root. (30 minutes.)
7. If you ship an API, publish `/.well-known/api-catalog`. (15 minutes for a small API; longer for big surfaces.)
8. If you ship an MCP server, publish the server card. (Already required by the MCP spec.)

Steps 1 to 5 will move most sites from a Cloudflare score in the 20s to one in the 50s or 60s. Steps 6 to 8 push past 80 if and only if the use cases apply. Sites without a developer or agent surface should not chase the last twenty points by shipping ceremony files.

## Other scanners to pair it with

- **Lighthouse.** Classic Core Web Vitals + accessibility + best practices. Independent of agent readiness; covers the parts the score doesn't.
- **The [Mega SEO Analyzer](/tools/mega-seo-analyzer/)** for the agent signals plus ranking, HCU, security, and trust in one pass.
- **The [AI Posture Audit](/tools/ai-posture-audit/)** for AI bot policy consistency across robots.txt, ai.txt, and meta robots.
- **The [Agent-Ready Audit](/tools/agent-ready-audit/)** for the full well-known + agent-card set in one tool.

## Related reading

- [AGENTS.md: The Root-Level README That AI Coding Agents Actually Read](/blog/blog-agents-md-root-spec/)
- [llms-ctx.txt and llms-ctx-full.txt: The FastHTML Extensions to llms.txt](/blog/blog-llms-ctx-fasthtml-extension/)
- [Markdown for Agents: Serving Your Pages Twice](/blog/blog-fix-markdown-for-agents-warning/)
- [The Open Agent Protocol Stack](/blog/blog-agent-protocol-stack/)
- [The Agent Runtime: The New Browser Layer](/blog/blog-agent-runtime-the-new-browser-layer/)

Most of the signals above belong to the under-$100 AI stack we've been building all year. The full thesis (free SMB tools as top of funnel, agent-ready artifacts as load-bearing infrastructure, sub-$100 monthly spend) is the spine of [The $20 Dollar Agency](https://www.amazon.com/dp/B0FXJDLNG2).

## Fact-check notes and sources

- Cloudflare: [Introducing the Agent Readiness score](https://blog.cloudflare.com/agent-readiness/)
- isitagentready.com: [public scorer](https://isitagentready.com/)
- IETF: [RFC 9727 — API Catalog](https://www.rfc-editor.org/rfc/rfc9727.html)
- IETF: [RFC 8414 — OAuth 2.0 Authorization Server Metadata](https://www.rfc-editor.org/rfc/rfc8414.html)
- IETF: [RFC 9728 — OAuth 2.0 Protected Resource Metadata](https://www.rfc-editor.org/rfc/rfc9728.html)
- IETF: [RFC 9421 — HTTP Message Signatures](https://www.rfc-editor.org/rfc/rfc9421.html)
- Anthropic: [Model Context Protocol specification](https://modelcontextprotocol.io/)
- Google: [Agent-to-Agent (A2A) protocol announcement](https://developers.googleblog.com/en/a2a-a-new-era-of-agent-interoperability/)

*This post is informational. The Cloudflare Agent Readiness Score is a vendor-published metric and is not a search-ranking signal. Standards adoption is shifting; verify each spec's current status before building infrastructure around it.*

