
Every new AEO tool for 2026 — llms.txt quality, MCP server audit, AI content disclosure, author authority


Answer Engine Optimization is not SEO with a new name. AI retrievers score different signals, weight different corpora, and cite in different formats. Each tool below covers a specific AEO surface that Lighthouse never looks at.

1. llms.txt Quality Scorer

Most llms.txt files in the wild are boilerplate: a single H1, no link list, no descriptions. AI retrievers skip them. The scorer fetches your /llms.txt, /llms-full.txt, and /.well-known/llms.txt, counts link entries, section headings, description density, and calls out the patterns that turn a file from decorative to useful.
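The scoring logic can be sketched as a small heuristic over the fetched file body. This is a minimal sketch, not the tool's actual implementation; the function name, the thresholds implied, and the "described link" heuristic (a link entry followed by `: ` and prose) are assumptions.

```python
import re

def score_llms_txt(text: str) -> dict:
    """Heuristic quality signals for an llms.txt body.

    Counts markdown link entries, H2 sections, and links that carry a
    description (e.g. "- [Docs](/docs): API reference"), which is the
    pattern that separates a useful file from a decorative one.
    """
    lines = text.splitlines()
    links = re.findall(r"\[[^\]]+\]\([^)]+\)", text)
    sections = [l for l in lines if l.startswith("## ")]
    described = [l for l in lines if re.search(r"\]\([^)]+\)\s*:\s*\S", l)]
    return {
        "links": len(links),
        "sections": len(sections),
        "described_links": len(described),
        "description_density": round(len(described) / len(links), 2) if links else 0.0,
    }
```

A file scoring zero links or zero sections is the "single H1, no link list" boilerplate case described above.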

2. MCP Server Audit

The Model Context Protocol is the emerging standard for AI agents to discover and call services. If you run an agent-callable API, you need /.well-known/mcp-server.json with name, version, tools[], endpoint, and auth. The audit probes standard discovery paths (mcp-server.json, mcp.json, agent-card.json, ai-plugin.json), validates manifest shape, and checks CORS + rate-limit posture.
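The manifest-shape check reduces to validating required fields. A minimal sketch, assuming the field set named above (`name`, `version`, `tools[]`, `endpoint`, `auth`); the function name and the exact probe order are assumptions, not the audit's real internals.

```python
# Discovery paths the audit probes, in an assumed order of preference.
DISCOVERY_PATHS = [
    "/.well-known/mcp-server.json",
    "/.well-known/mcp.json",
    "/.well-known/agent-card.json",
    "/.well-known/ai-plugin.json",
]

def validate_mcp_manifest(manifest: dict) -> list[str]:
    """Return a list of shape problems; an empty list means the manifest passes."""
    problems = []
    for key in ("name", "version", "tools", "endpoint", "auth"):
        if key not in manifest:
            problems.append(f"missing field: {key}")
    if not isinstance(manifest.get("tools"), list):
        problems.append("tools must be an array")
    return problems
```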

3. AI Content Disclosure Audit

EU AI Act Article 50 (effective August 2026) requires visible disclosure of AI-generated content. FTC endorsement guidance already does. The audit checks for visible "AI-generated" / "AI-assisted" text, schema.org creativeWorkStatus, SoftwareApplication author entries, C2PA Content Credentials references, and the presence of an /ai-policy or /editorial-policy page.
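The surface-level checks amount to scanning the rendered page for disclosure markers. A rough sketch under stated assumptions: the phrase list, the regexes, and the function name are illustrative, and real disclosure checking would also parse the JSON-LD rather than pattern-match it.

```python
import re

# Illustrative marker list; the real audit's phrase set may differ.
DISCLOSURE_PATTERNS = [
    r"AI-generated", r"AI-assisted", r"creativeWorkStatus", r"Content Credentials",
]

def disclosure_signals(html: str) -> dict:
    """Flag which disclosure markers appear in a page, plus a policy-page link."""
    found = {p: bool(re.search(p, html, re.IGNORECASE)) for p in DISCLOSURE_PATTERNS}
    found["policy_page_link"] = bool(
        re.search(r'href="[^"]*(ai-policy|editorial-policy)', html)
    )
    return found
```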

4. Author Authority per Article

Post-HCU (Helpful Content Update), Google weights authorship signals heavily. The audit scores an article on 8 dimensions: Article schema type, author in schema, author URL, rel=author link, visible byline, author page link, author photo, bio snippet. Scores run 0-8, with a per-dimension fix prompt.
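The eight-dimension score can be sketched as a set of per-dimension checks against the page HTML. The regexes below are crude illustrations (a real audit would parse the DOM and the JSON-LD); the function name and CSS-class heuristics for byline/bio are assumptions.

```python
import re

def author_authority_score(html: str) -> tuple[int, dict]:
    """Score 0-8 across the eight authorship dimensions (regex heuristics)."""
    checks = {
        "article_schema":   '"@type"' in html and '"Article"' in html,
        "author_in_schema": '"author"' in html,
        "author_url":       re.search(r'"author"[^}]*"url"', html, re.S) is not None,
        "rel_author":       'rel="author"' in html,
        "visible_byline":   re.search(r'class="[^"]*byline', html) is not None,
        "author_page_link": re.search(r'href="[^"]*/author/', html) is not None,
        "author_photo":     re.search(r'<img[^>]*author', html) is not None,
        "bio_snippet":      re.search(r'class="[^"]*bio', html) is not None,
    }
    return sum(checks.values()), checks
```

The per-dimension dict is what drives the fix prompts: any `False` entry maps to one concrete remediation.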

5. Live Citation Surface Probe

AI retrievers lean on 10-20 reference corpora to decide who to cite. The probe queries 18 of them — Wikipedia, Wikidata, Crossref, OpenAlex, Google Scholar, G2, Capterra, Trustpilot, BBB, Reuters, AP News, Crunchbase, GitHub, Stack Overflow, Reddit, Quora, YouTube, Medium — and reports which ones cite your brand and which don't. Gap list drives the next quarter's entity-building.
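The hit/gap split can be sketched as a pure function over pre-fetched search results (the actual probe does the network queries; here they're assumed to have already happened). The corpus subset and function name are illustrative.

```python
# Illustrative subset of the 18 corpora the probe queries.
CORPORA = ["Wikipedia", "Wikidata", "Crossref", "OpenAlex", "G2", "GitHub"]

def citation_gaps(brand: str, results: dict[str, str]) -> dict[str, list[str]]:
    """Split corpora into cited vs. missing, given fetched search text per corpus."""
    cited = [c for c in CORPORA if brand.lower() in results.get(c, "").lower()]
    return {"cited": cited, "gaps": [c for c in CORPORA if c not in cited]}
```

The `gaps` list is the entity-building backlog: each missing corpus is a profile to claim, a listing to create, or a citation to earn.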

6. Knowledge Graph / Wikidata Audit

Checks for a Wikidata Q-number, Wikipedia article, sameAs array linking to authority profiles (LinkedIn, ORCID, IMDB, Crunchbase), and schema.org entity markers. Wikidata is the cheapest Knowledge Graph seed — lower bar than Wikipedia, same benefit to AI retrieval.
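The sameAs portion of the check is mechanical: extract the array from the entity's JSON-LD and diff it against the authority hosts. A minimal sketch; the host list and function name are assumptions.

```python
import json

# Authority profiles the audit looks for in sameAs (illustrative list).
AUTHORITY_HOSTS = ("wikidata.org", "linkedin.com", "orcid.org",
                   "imdb.com", "crunchbase.com")

def same_as_audit(jsonld: str) -> dict:
    """Report which authority hosts a JSON-LD entity's sameAs array covers."""
    data = json.loads(jsonld)
    same_as = data.get("sameAs", [])
    covered = {h for h in AUTHORITY_HOSTS for url in same_as if h in url}
    return {
        "has_wikidata": any("wikidata.org" in u for u in same_as),
        "covered": sorted(covered),
        "missing": sorted(set(AUTHORITY_HOSTS) - covered),
    }
```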

7. AI Crawler Log Analyzer

Paste your server access log. The analyzer separates GPTBot, ClaudeBot, PerplexityBot, Google-Extended, Applebot-Extended, Bytespider, OAI-SearchBot, and other AI crawlers from regular Googlebot. Flags 4xx / 5xx spikes per bot, URLs being crawled, and crawler identity mismatches.
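The core of the analyzer is a pass over combined-log-format lines, bucketing hits and 4xx/5xx responses per AI user agent. A simplified sketch: the regex assumes the standard combined log format, and the bot list here is a subset.

```python
import re
from collections import defaultdict

AI_BOTS = ("GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended",
           "Applebot-Extended", "Bytespider", "OAI-SearchBot")

# Matches the request, status, and user-agent fields of a combined-log line.
LOG_RE = re.compile(
    r'"\w+ (?P<path>\S+) [^"]*" (?P<status>\d{3}) \S+ "[^"]*" "(?P<ua>[^"]*)"'
)

def analyze_log(lines):
    """Per-AI-bot hit and 4xx/5xx error counts; non-AI traffic is ignored."""
    stats = defaultdict(lambda: {"hits": 0, "errors": 0})
    for line in lines:
        m = LOG_RE.search(line)
        if not m:
            continue
        bot = next((b for b in AI_BOTS if b in m["ua"]), None)
        if bot:
            stats[bot]["hits"] += 1
            if m["status"][0] in "45":
                stats[bot]["errors"] += 1
    return dict(stats)
```

An error spike for one bot (say, GPTBot hitting 403s) is exactly the crawler-policy mismatch the next tool cross-references.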

8. AI Posture Audit

Cross-references robots.txt, ai.txt, /.well-known/ai.txt, meta robots, and X-Robots-Tag headers per AI bot. Flags disagreements (e.g. robots.txt allows GPTBot but ai.txt says "No training"). Existing tool; now paired with the rest.
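Once each source is parsed into per-bot allow flags, the disagreement check itself is a simple diff. A sketch of just that step (parsing robots.txt, ai.txt, and headers into the flags is out of scope here; the function name is an assumption):

```python
def posture_conflicts(robots_allows: dict[str, bool],
                      ai_txt_allows: dict[str, bool]) -> list[str]:
    """Bots that robots.txt permits to crawl while ai.txt forbids training.

    Inputs are pre-parsed allow flags per bot name; only bots present in
    both sources can conflict.
    """
    return sorted(b for b, ok in robots_allows.items()
                  if ok and ai_txt_allows.get(b) is False)
```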

9. Chunk Retrievability Audit and Passage Retrievability

Existing tools — score how well your paragraphs stand alone as retrieval units. AI retrievers embed content as ~150-token chunks; paragraphs that aren't self-contained never get cited.
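A crude version of the self-containment test: a chunk is a viable retrieval unit if it names the entity it is about and does not open with an unresolved pronoun. Both heuristics are assumptions, far simpler than what an embedding-based scorer does.

```python
# Openers that leave the chunk's subject unresolved (illustrative list).
PRONOUN_OPENERS = ("it ", "this ", "these ", "they ", "that ", "he ", "she ")

def chunk_self_contained(paragraph: str, entity: str) -> bool:
    """Heuristic: chunk names its entity and doesn't start with a dangling pronoun."""
    p = paragraph.strip().lower()
    return entity.lower() in p and not p.startswith(PRONOUN_OPENERS)
```

"Acme builds embeddable widgets" survives as a standalone chunk; "This means it ships faster" carries no retrievable meaning once it is cut from its neighbors.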

10. AI Citation Readiness

Existing tool — an aggregate score across speakable schema, cite markup, first-paragraph definitional sentences, and entity clarity. The AEO dimension of Batch 10's Mega SEO Analyzer v2 aggregates all of these.

And the orchestrator

Mega SEO Analyzer v2 has an AEO dimension that rolls up llms.txt discovery, AI-crawler policy presence, schema density, speakable, cite markup, and MCP discovery files. Run it; when AEO scores low, the specialists above are one click away.

Why AEO is a separate discipline

Google indexes HTML. AI retrievers tokenize HTML and embed it as vectors. That changes what signals matter:

  • Self-contained paragraphs (chunk retrievability) matter more than H2 structure.
  • Entity disambiguation matters more than keyword density.
  • Reference-corpus presence matters more than backlinks.
  • Explicit author credentials matter more than rel=author alone.

The tools above target those signals specifically. Running them is fast, and the fixes are usually small: adding a sameAs array, publishing /llms.txt, claiming a Wikidata entry, adding a visible byline.


Fact-check notes and sources

This post is informational, not legal, SEO-consulting, or compliance advice. Mentions of OpenAI, Anthropic, Google, Perplexity, ByteDance, Apple, Common Crawl, Wikipedia, Wikidata, Crossref, OpenAlex, G2, Capterra, Trustpilot, BBB, Reuters, AP News, Crunchbase, GitHub, Stack Overflow, Reddit, Quora, YouTube, and Medium are nominative fair use. No affiliation is implied.
