
AI Posture Parity Audit

Paste any URL. This tool fetches robots.txt, ai.txt (root + /.well-known/), the page's <meta name="robots">, and the X-Robots-Tag response header, then cross-references the allow/disallow signal for every major AI crawler. If GPTBot sees "Allow: /" in robots.txt but "Disallow: /" in ai.txt, that split may be deliberate (crawl allowed, training disallowed) or an accident. The matrix below shows exactly where the signals diverge and how to align them.
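The cross-reference step can be sketched roughly as follows. This is an illustrative simplification, not the tool's actual code: the bot list, the exact-case agent matching, and the simple longest-prefix rule are all assumptions here.

```python
# Sketch of a per-bot parity matrix over robots.txt and ai.txt.
# Assumptions: exact-case user-agent names, no path wildcards,
# longest-prefix-wins, default allow. Real parsers handle more.

AI_BOTS = ["GPTBot", "ClaudeBot", "Google-Extended", "CCBot", "PerplexityBot"]

def parse_rules(policy_text):
    """Group Allow/Disallow rules by user-agent."""
    groups, agents, rules = {}, [], []
    def flush():
        for agent in agents:
            groups.setdefault(agent, []).extend(rules)
    for raw in policy_text.splitlines():
        line = raw.split("#", 1)[0].strip()   # drop comments
        if ":" not in line:
            continue
        field, _, value = line.partition(":")
        field, value = field.strip().lower(), value.strip()
        if field == "user-agent":
            if rules:                         # a new group starts
                flush()
                agents, rules = [], []
            agents.append(value)
        elif field in ("allow", "disallow"):
            rules.append((field, value))
    flush()
    return groups

def verdict(groups, bot, path="/"):
    """Longest-matching rule for one bot; fall back to '*', default allow."""
    rules = groups.get(bot, groups.get("*", []))
    best_len, best = -1, "allow"
    for directive, prefix in rules:
        if prefix and path.startswith(prefix) and len(prefix) > best_len:
            best_len, best = len(prefix), directive
    return best

def parity_matrix(robots_txt, ai_txt, path="/"):
    """Compare both files' verdicts for `path`, per AI bot."""
    r, a = parse_rules(robots_txt), parse_rules(ai_txt)
    return {bot: {"robots.txt": verdict(r, bot, path),
                  "ai.txt": verdict(a, bot, path)}
            for bot in AI_BOTS}
```

A divergent row (e.g. robots.txt "allow" next to ai.txt "disallow") is exactly what the matrix surfaces, whether that divergence is the intentional crawl-vs-training split or a mistake.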

Why per-bot signal parity matters for AI crawl behavior →

Audit URL

The tool will fetch the page, read its robots meta + X-Robots-Tag, and pull robots.txt / ai.txt / .well-known/ai.txt from the same origin.
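Deriving the same-origin fetch targets from an arbitrary page URL looks roughly like this (a sketch; the live tool may fetch additional surfaces):

```python
from urllib.parse import urlsplit, urlunsplit

def policy_urls(page_url):
    """Map any page URL to its origin's policy-file URLs."""
    parts = urlsplit(page_url)
    # Keep scheme + host only; drop path, query, and fragment.
    origin = urlunsplit((parts.scheme, parts.netloc, "", "", ""))
    return {
        "robots.txt": f"{origin}/robots.txt",
        "ai.txt": f"{origin}/ai.txt",
        ".well-known/ai.txt": f"{origin}/.well-known/ai.txt",
    }
```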

Why this tool exists — deep dive

From per-bot matrix to master prompt →
How the tool grew from a 4-file crawl-parity checker to a 13-file discovery-surface audit with one consolidated fix prompt.
Crawl vs training — not the same thing →
Why "robots.txt=allow, ai.txt=disallow" is the canonical AEO-friendly pattern, not a conflict.
llms.txt structure matters →
Presence isn't enough; the spec calls for H1 + blockquote + H2 sections + link-list. The tool validates all four.
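The four-element structure check can be sketched as below. This is an assumed, simplified reading of the llms.txt convention (one H1 title, a blockquote summary, H2 sections, markdown link lists), not the tool's exact validator.

```python
def validate_llms_txt(text):
    """Check the four structural elements llms.txt is expected to carry.
    Simplification: line-based checks only, no nesting or ordering rules."""
    lines = text.splitlines()
    checks = {
        "h1": any(l.startswith("# ") for l in lines),          # title
        "blockquote": any(l.startswith("> ") for l in lines),  # summary
        "h2_sections": any(l.startswith("## ") for l in lines),
        "link_list": any(
            l.lstrip().startswith(("- [", "* [")) and "](" in l
            for l in lines
        ),
    }
    return checks, all(checks.values())
```

A file that merely exists but lacks, say, the blockquote summary would fail the second check even though a presence-only audit would pass it.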

Related tools

→ AI Bot Policy Generator · ai.txt Generator · Mega Analyzer (full audit) · .well-known Audit