Paste any URL. This tool fetches robots.txt, ai.txt (root + /.well-known/), the page's <meta name="robots">, and the X-Robots-Tag response header — then cross-references the allow/disallow signals for every major AI crawler. If GPTBot sees "Allow: /" in robots.txt but "Disallow: /" in ai.txt, crawler behavior becomes unpredictable. The matrix below shows exactly where the signals diverge and how to align them.
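As a sketch of the kind of conflict described above (bot name real, rules hypothetical), the two files can contradict each other like this:

```text
# robots.txt (site root) — permits crawling
User-agent: GPTBot
Allow: /

# ai.txt (root or /.well-known/ai.txt) — blocks the same bot
User-agent: GPTBot
Disallow: /
```

Which rule wins depends on which file the crawler consults, which is exactly the ambiguity the matrix surfaces.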
The tool will fetch the page, read its robots meta + X-Robots-Tag, and pull robots.txt / ai.txt / .well-known/ai.txt from the same origin.
Overview
Per-bot posture matrix
Each row shows what the four sources say about a specific AI crawler. Disagreements are highlighted. Allow = the source explicitly permits crawling. Disallow = explicitly blocks. none = the source says nothing about this bot (falls back to the User-agent: * wildcard rule).
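The wildcard fallback described above can be sketched in Python. This is a minimal illustration, not the tool's implementation; real robots.txt resolution also weighs path patterns and rule precedence:

```python
def effective_rule(directives, bot):
    """Resolve one bot's posture from one parsed source.

    `directives` maps a user-agent token to "allow" or "disallow".
    A bot with no group of its own falls back to the "*" wildcard;
    if even "*" is absent, the source says nothing ("none").
    """
    if bot in directives:
        return directives[bot]          # explicit per-bot group wins
    return directives.get("*", "none")  # wildcard fallback, else silence

# Hypothetical parsed sources for one site:
robots_txt = {"GPTBot": "allow", "*": "disallow"}
ai_txt = {"*": "disallow"}

print(effective_rule(robots_txt, "GPTBot"))  # explicit group
print(effective_rule(ai_txt, "GPTBot"))      # wildcard fallback
print(effective_rule({}, "ClaudeBot"))       # no signal at all
```

The matrix renders the third case as "none" rather than treating silence as permission, since the two mean different things to different crawlers.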
Findings
Source documents
Identity + AI-discovery files
Beyond crawl directives, AI engines look for these additional files to understand the site's identity, retrieval surface, and machine-readable policies. Missing files reduce AEO / answer-engine citation probability even when crawl directives are perfect.
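One of those discovery files is llms.txt. As a rough sketch (the format is a community proposal and the section names below are illustrative), it is a markdown file served at the site root: an H1 title, a one-line blockquote summary, then H2 sections listing the pages an answer engine should read first:

```text
# Example Site

> One-line description of what the site is and who it serves.

## Docs
- [Getting started](https://example.com/docs/start): setup guide
- [API reference](https://example.com/docs/api): endpoint catalog

## Optional
- [Changelog](https://example.com/changelog): release history
```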
One consolidated prompt covering everything the audit found: per-bot crawl/training posture, directive conflicts, missing discovery files, llms.txt structural issues, and any regenerated robots.txt / ai.txt if you used the regenerate panel. Copy it, paste it into Claude or ChatGPT, and get a single prioritized fix plan.
AI fix prompt (crawl-parity only)
Narrower than the master prompt: just the crawl-directive disagreements. Useful when you only want to fix alignment, not the full discovery-file picture.
Regenerate aligned files
The matrix above shows what your site currently tells each bot. This section lets you edit the stance (either quickly with a preset or per-bot) and emit a new robots.txt + ai.txt that expresses a consistent intent across every source, so the disagreements flagged above are resolved in whichever direction you want.
Pre-filled from the audited site's current state.
Bulk actions:
Bot
Crawl (robots.txt)
Training (ai.txt)
Was
robots.txt
ai.txt
Deploy both files at the site root. For full control over the bot catalog (including legacy search bots, per-path disallow rules, and x-default sitemap declarations), switch to the dedicated AI Bot Policy Generator.
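As a hedged sketch of what an aligned pair might look like (bot names and the "allow crawl, block training" stance are illustrative), with robots.txt governing crawl access and ai.txt governing training permissions:

```text
# robots.txt — crawl access
User-agent: GPTBot
Allow: /

User-agent: *
Allow: /

Sitemap: https://example.com/sitemap.xml

# ai.txt — training permissions
User-agent: GPTBot
Disallow: /

User-agent: *
Disallow: /
```

Because both files state a rule for every bot, no crawler has to fall back to conflicting wildcard defaults.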