The AEO nightmare scenario: GPTBot was allowed last month. This month it isn't. Nobody remembers which deploy changed it. Meanwhile every Perplexity answer about your category has stopped citing you. By the time someone diagnoses it, you've lost 90 days of visibility.
Three files govern AI-crawler behavior: robots.txt (crawl access), ai.txt (data-use permissions), and llms.txt (content guidance). All three drift independently. Deploys regenerate them. Agencies "clean them up." CMS plugin updates reformat them. The diff tool compares two snapshots side by side and tells you exactly what changed.
Robots / LLM Drift Diff handles all three files. Paste before + after contents. Get classified changes.
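Before anything can be classified, the pasted contents have to be grouped by user-agent. A minimal sketch of that first step (function name is illustrative; it treats each User-agent line as starting its own group, a simplification of RFC 9309's rule that consecutive User-agent lines share one group):

```python
def parse_sections(text: str) -> dict[str, list[str]]:
    """Group directives by the User-agent section they appear under."""
    sections: dict[str, list[str]] = {"": []}  # "" holds global lines like Sitemap
    current = sections[""]
    for raw in text.splitlines():
        line = raw.split("#", 1)[0].strip()    # drop comments and whitespace
        if not line or ":" not in line:
            continue
        field, value = (p.strip() for p in line.split(":", 1))
        if field.lower() == "user-agent":
            current = sections.setdefault(value, [])
        else:
            current.append(f"{field.lower()}: {value}")
    return sections
```

Once both snapshots are in this shape, "what changed for GPTBot" is a set difference rather than a manual eyeball over two text blobs.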
The five drift patterns it catches
Bot newly blocked. A Disallow: / appears under a User-agent: GPTBot section that wasn't in the previous snapshot. Critical: this silently kills AI citations for that bot.
Bot newly allowed. Opposite direction. Often intentional (you changed policy), sometimes accidental (CMS plugin "fixed" your file by deleting the restriction).
Path-level Disallow changes. A new Disallow: /blog/ blocks an entire section. Was that intentional or a stray keystroke?
Sitemap reference added / removed. Google loses the discovery channel when a sitemap reference disappears. Often happens during domain migrations.
Crawl-delay added. Googlebot ignores the directive, but Bingbot and many other crawlers honor it, so an accidental Crawl-delay throttles bots you never meant to slow. Usually unwanted.
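The five patterns above reduce to set differences over per-agent directive sets. A sketch of that classification (function and severity names are illustrative, not the tool's actual API):

```python
def rules(text: str) -> dict[str, set[str]]:
    """Per-agent directive sets; sitemap references collected under '@sitemap'."""
    out: dict[str, set[str]] = {}
    agent = None
    for raw in text.splitlines():
        line = raw.split("#", 1)[0].strip()
        if not line or ":" not in line:
            continue
        field, value = (p.strip() for p in line.split(":", 1))
        field = field.lower()
        if field == "user-agent":
            agent = value
            out.setdefault(agent, set())
        elif field == "sitemap":
            out.setdefault("@sitemap", set()).add(value)
        elif agent is not None:
            out[agent].add(f"{field}: {value}")
    return out

def classify_drift(before: str, after: str) -> list[str]:
    """Report the five drift patterns between two snapshots."""
    b, a = rules(before), rules(after)
    findings: list[str] = []
    for agent in sorted((set(b) | set(a)) - {"@sitemap"}):
        old, new = b.get(agent, set()), a.get(agent, set())
        if "disallow: /" in new - old:
            findings.append(f"CRITICAL: {agent} newly blocked (Disallow: /)")
        if "disallow: /" in old - new:
            findings.append(f"NOTICE: {agent} newly allowed (Disallow: / removed)")
        for d in sorted(new - old):
            if d.startswith("disallow:") and d != "disallow: /":
                findings.append(f"WARN: {agent} new path block ({d})")
            if d.startswith("crawl-delay:"):
                findings.append(f"WARN: {agent} Crawl-delay added ({d})")
    for s in sorted(a.get("@sitemap", set()) - b.get("@sitemap", set())):
        findings.append(f"INFO: sitemap reference added ({s})")
    for s in sorted(b.get("@sitemap", set()) - a.get("@sitemap", set())):
        findings.append(f"WARN: sitemap reference removed ({s})")
    return findings
```

The point of classifying rather than raw-diffing: a raw diff shows a changed line; a classified diff tells you which of the five failure modes it is and how urgent it is.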
Why per-bot classification matters
The tool specifically flags AI-bot changes separately from general changes. A Disallow added under User-agent: * might be legitimate infrastructure hygiene. The same line added under User-agent: ClaudeBot is an AEO decision with revenue implications. Surfacing them separately makes it obvious which decisions need scrutiny.
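The separation itself is a one-line routing decision once you maintain a list of AI crawler user-agents. A sketch (the bot list is illustrative and deliberately incomplete; the real tool's list is longer):

```python
# Known AI crawler user-agents (illustrative subset, lowercased for matching).
AI_BOTS = {"gptbot", "claudebot", "perplexitybot", "google-extended", "ccbot"}

def bucket(agent: str, change: str) -> str:
    """Route a classified change into the AI-bot queue or the general queue."""
    if agent.lower() in AI_BOTS:
        return f"AI-BOT REVIEW: [{agent}] {change}"  # AEO decision, needs scrutiny
    return f"GENERAL: [{agent}] {change}"            # ordinary infra hygiene

# The identical change lands in different review queues:
print(bucket("*", "added Disallow: /tmp/"))       # general infra queue
print(bucket("ClaudeBot", "added Disallow: /"))   # AEO review queue
```

That asymmetry is the whole argument: the change text is identical, the stakes are not.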
The workflow
- Save a snapshot of your robots.txt / ai.txt / llms.txt today (copy-paste into a local file).
- After any deploy or plugin update, re-fetch the live files.
- Paste both into the diff tool.
- Review every change. Classify as intentional or accidental.
- For accidental changes, either revert via deploy or regenerate the file via AI Bot Policy Generator.
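The fetch-and-compare half of the workflow is scriptable with the stdlib alone. A sketch (URL and snapshot path are placeholders; the raw line diff here is a stand-in for pasting into the tool):

```python
import difflib
import pathlib
import urllib.request

def fetch(url: str) -> str:
    """Re-fetch the live file (step 2 of the workflow)."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")

def drift_lines(before: str, after: str) -> list[str]:
    """Added/removed lines between the saved snapshot and the live copy."""
    diff = difflib.unified_diff(before.splitlines(), after.splitlines(),
                                fromfile="snapshot", tofile="live", lineterm="")
    return [l for l in diff
            if l.startswith(("+", "-")) and not l.startswith(("+++", "---"))]

# After a deploy (paths and URL are placeholders):
#   snapshot = pathlib.Path("snapshots/robots.txt").read_text()
#   changes = drift_lines(snapshot, fetch("https://example.com/robots.txt"))
#   Review each '+'/'-' line: revert accidental ones, re-snapshot intentional ones.
```

An empty list means no drift; anything else goes through the intentional-vs-accidental review in step 4.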
Doing this quarterly catches 90% of drift. Doing it post-deploy catches the rest.
Related reading
- AI Crawler Access Auditor — current-state per-bot verdict
- AI Bot Policy Generator — emit aligned files from policy
- AI Posture Audit — full 13-file discovery-surface audit
Fact-check notes and sources
- RFC 9309 robots.txt: www.rfc-editor.org/rfc/rfc9309
- Spawning ai.txt: site.spawning.ai/spaces/ai-txt
- llmstxt.org: llmstxt.org
The $100 Network covers AI posture as a cross-site concern: one policy, N sites. The diff tool is how you catch per-site drift before it costs citations.