Nobody searches like they did two years ago.
When someone asks Perplexity, ChatGPT, or Google's AI Overview a question, the answer is no longer a list of ten blue links. It's a synthesized paragraph with footnotes — and those footnotes are the new page-one real estate. If your article isn't in the footnotes, you're invisible.
So the question a writer should ask in 2026 isn't "will this rank?" It's: "will an AI system cite this?"
Today I'm releasing a free tool that answers that question directly.
## What it does
AI Citation Readiness Audit takes a URL or a block of text and scores it against fourteen signals that AI retrieval systems demonstrably weight when choosing sources. It returns:
- A 0–100 score with letter grade
- A pass / warn / fail breakdown for every signal
- Plain-English next steps for each failure
No paid API. No crawl budget to burn. Paste a URL, or paste the full HTML if CORS blocks the fetch. Results in under a second.
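To make the output concrete, here is a hypothetical sketch of what a result could look like as a data structure. This is not the tool's actual data model; the class names, field names, and grade cutoffs are all assumptions for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class SignalResult:
    name: str
    status: str        # "pass", "warn", or "fail"
    advice: str = ""   # plain-English next step when not "pass"

@dataclass
class AuditResult:
    score: int                          # 0-100
    grade: str                          # letter grade derived from score
    signals: list = field(default_factory=list)  # list of SignalResult

def letter_grade(score: int) -> str:
    # Hypothetical cutoffs -- the tool's real boundaries may differ.
    for cutoff, grade in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if score >= cutoff:
            return grade
    return "F"
```

The point of the pass / warn / fail triple per signal is that you never get a bare number: every deduction comes with the name of the signal that caused it.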
## The 14 signals
Each signal maps to a behavior observed in how modern AI retrieval pipelines score sources:
- Word count — pages shorter than 800 words are typically skipped by AI systems when assembling synthesized answers
- Fact density — numbers and dates per thousand words signal factual content
- Named quotes — direct quotations attributed to humans indicate real reporting
- Authoritative citations — outbound links to .gov, .edu, Wikipedia, PubMed, arXiv, and established news outlets
- Schema JSON-LD — Article + Person schema makes extraction unambiguous
- Canonical URL — prevents the AI from indexing a duplicate
- Author attribution — a real named author with Person schema or meta tag
- Date published and modified — AI heavily prefers recent content
- Original research signals — phrases like "we surveyed", "our data shows", "we analyzed"
- Lists and tables — machine-extractable structures AI answers reuse directly
- Question-style subheadings — H2s phrased as questions match common prompts
- Comparison or "vs" structure — side-by-sides are among the highest-cited patterns
- Semantic HTML — proper `<article>`, `<main>`, and `<time>` elements
- llms.txt reference — a discoverable AI policy file signals you welcome ingestion
The tool gives green, amber, or red for each — and tells you what to fix if it's not green.
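Several of these checks are simple enough to sketch in a few lines. The Python heuristics below are illustrative approximations only, not the tool's actual implementation; the thresholds, regexes, and function names are all assumptions:

```python
import re

def word_count_signal(text: str, minimum: int = 800) -> str:
    """Length check; 800 words is the threshold named in the post."""
    n = len(text.split())
    if n >= minimum:
        return "pass"
    return "warn" if n >= minimum * 0.75 else "fail"

def fact_density(text: str) -> float:
    """Numbers (including years) per thousand words."""
    words = max(len(text.split()), 1)
    facts = len(re.findall(r"\b\d[\d,.]*\b", text))
    return facts / words * 1000

def question_subheadings(html: str) -> int:
    """Count H2 headings phrased as questions."""
    h2s = re.findall(r"<h2[^>]*>(.*?)</h2>", html, re.I | re.S)
    return sum(1 for h in h2s if h.strip().endswith("?"))
```

Notice that all three run on the raw page content alone, which is why the audit needs no API key and returns in under a second.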
## Why this isn't another SEO score
Traditional SEO scores measure what Google's crawler likes: keyword density, backlinks, page speed. Those still matter, but they don't predict whether Perplexity will cite you when someone asks a question.
AI retrieval pipelines weight different things:
- Extractability — can the system pull a clean quote or stat without hallucinating structure? Lists, tables, and semantic HTML beat paragraph walls.
- Provenance — does this source itself cite other sources? Academic-style outbound linking is a strong signal.
- Specificity — are there unique numbers, dates, or quotations that only this page reports? Original research signals matter because AI systems are trained to avoid duplicating the same quoted fact across redundant sources.
- Freshness — a machine-readable `dateModified` in schema beats a human-readable date alone.
This audit grades those dimensions directly rather than inferring them from traffic or backlinks.
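For the provenance and freshness dimensions, a minimal Article JSON-LD block covering the schema, author, and date signals might look like the following. The field values here are placeholders, not a prescription:

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Example headline",
  "author": { "@type": "Person", "name": "Jane Doe" },
  "datePublished": "2026-01-10",
  "dateModified": "2026-02-01"
}
```

A block like this, embedded in a `<script type="application/ld+json">` tag, gives a retrieval pipeline an unambiguous author and modification date without any HTML parsing guesswork.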
## How to use the score
Treat it as a relative benchmark, not a promise. Here's the workflow I recommend:
- Run it on your three best-performing articles. Note which signals they already pass — those are your baseline.
- Run it on an article that isn't getting AI citations. Compare the weaknesses against your winners. The delta is usually obvious.
- Fix the failures in order of impact. Schema JSON-LD, authoritative citations, and original research signals produce the biggest lift. Word count and semantic HTML matter but rarely move the needle alone.
- Re-audit after edits. The score is deterministic — same input, same output — so you can measure your changes.
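Because the score is deterministic, comparing a before and after run reduces to a dictionary diff. A minimal sketch, assuming each run yields a mapping of signal name to status (the function name and shape are hypothetical, not the tool's output format):

```python
def audit_delta(before: dict, after: dict) -> dict:
    """Return only the signals whose status changed between two runs.

    `before` and `after` map signal name -> "pass" / "warn" / "fail".
    A signal absent from `before` is reported as previously "missing".
    """
    return {
        name: (before.get(name, "missing"), status)
        for name, status in after.items()
        if before.get(name) != status
    }
```

Running this after each editing pass tells you exactly which fixes registered and which signals you still haven't moved.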
One caveat worth repeating from the tool's disclaimer: this audit uses on-page heuristics only. Real citation behavior also depends on domain authority, the prompt context, and each AI vendor's specific retrieval pipeline. A high score is necessary but not sufficient. A low score, though, almost guarantees invisibility.
## Open the tool
Open the AI Citation Readiness Audit →
It's part of a four-tool release this week — three siblings join it in the full tools hub: Entity Citation Radar, Framework Origination Signal Generator, and Newsletter Swap Matchmaker. All free, all hosted here, all usable without an account.