Part of the AEO / GEO / AI-search audit tool stack. See the pillar post for the full catalog of sibling audits and where this one fits in the lineup.
Readability isn't a soft signal anymore. Google's Helpful Content updates have shifted from "does this feel AI-written" to measurable surface features: sentence length, paragraph length, passive-voice ratio, and reading-grade mismatch with the expected audience.
AI-generated content is easy to detect not because AI writes poorly, but because AI writes uniformly. Human writing has high variance — short punchy sentences next to longer exposition, alternating active and passive voice, mixed paragraph lengths. Machine-generated drafts trend to the middle: every sentence 18-22 words, every paragraph 3-4 sentences, passive voice everywhere because the training distribution prefers formal tone.
The Readability Analyzer measures the five signals HCU-era quality systems consider.
The five metrics
1. Flesch-Kincaid Grade Level
The classic formula: 0.39 × (words/sentences) + 11.8 × (syllables/words) − 15.59. Outputs an approximate US school grade. Target depends on audience — general consumer: 8-9, developer docs: 11-12, academic/legal: 14+.
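The formula above is simple enough to sketch directly. This is not the tool's actual implementation — just a minimal Python version, where the syllable counter is a rough vowel-group heuristic (an assumption; production tools use pronunciation dictionaries):

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count vowel groups, subtract a silent trailing 'e'.
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_kincaid_grade(text: str) -> float:
    # 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)
```

Six one-syllable words in one sentence ("The cat sat on the mat.") score about −1.45, which is why very short sample texts can produce negative grades — the formula is calibrated for running prose, not fragments.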
2. Automated Readability Index (ARI)
A parallel formula using character count instead of syllables (faster, slightly different output). Cross-check against Flesch-Kincaid — if they disagree by more than 2 grade levels, your syllable density is unusual.
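The ARI formula from Smith & Senter (1967) swaps syllables for character counts: 4.71 × (characters/words) + 0.5 × (words/sentences) − 21.43. A minimal sketch for cross-checking against Flesch-Kincaid (again an illustration, not the tool's code):

```python
import re

def ari(text: str) -> float:
    # Automated Readability Index:
    # 4.71 * (chars/words) + 0.5 * (words/sentences) - 21.43
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    chars = sum(len(w) for w in words)  # letters only, no punctuation/spaces
    return (4.71 * (chars / len(words))
            + 0.5 * (len(words) / len(sentences))
            - 21.43)
```

Because ARI counts characters rather than syllables, long Latinate words (few characters per syllable) push the two scores apart — that divergence beyond ~2 grade levels is the "unusual syllable density" signal.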
3. Average sentence length (words)
Over 24 = fatigue signal. Under 12 = choppy. 15-20 is the sweet spot for most prose.
4. Passive-voice ratio
Percentage of sentences matching passive-voice patterns (was/is/are/were + past participle). Under 10% = active, engaging. 15-25% = noticeable drag. 25%+ = bureaucratic, demoted in HCU.
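The was/is/are/were + past-participle pattern can be approximated with a regex. This naive sketch (my assumption of the approach — real detectors use part-of-speech tagging and catch irregular participles) shows the shape of the metric:

```python
import re

# "to be" form followed by a word ending in -ed/-en; misses irregular
# participles like "thrown" and over-matches adjectives like "was tired".
PASSIVE = re.compile(
    r"\b(?:is|are|was|were|be|been|being)\s+\w+(?:ed|en)\b",
    re.IGNORECASE,
)

def passive_ratio(text: str) -> float:
    # Fraction of sentences containing at least one passive-looking match.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    hits = sum(1 for s in sentences if PASSIVE.search(s))
    return hits / len(sentences)
```

"The report was written by the team. The dog ran." scores 0.5 — one passive sentence out of two, right at the edge of the "active, engaging" band.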
5. Sentence-length variance
Standard deviation of sentence length in words. Low variance (σ < 4) = robotic. High variance (σ > 10) = natural rhythm.
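Average length and σ fall out of the same pass. A sketch using the standard library (population standard deviation, since we're measuring the text itself, not sampling from it):

```python
import re
from statistics import mean, pstdev

def sentence_length_stats(text: str) -> tuple[float, float]:
    # Returns (average sentence length in words, population std dev).
    lengths = [len(re.findall(r"[A-Za-z']+", s))
               for s in re.split(r"[.!?]+", text) if s.strip()]
    return mean(lengths), pstdev(lengths)
```

A 3-word sentence next to a 7-word sentence gives mean 5.0 and σ 2.0 — squarely in the "robotic" band; human prose mixes 6-word punches with 30-word exposition and lands well above σ 4.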
Plus: paragraph diagnostics
The tool also emits paragraph-level findings:
- Paragraph word-count distribution
- Whether paragraphs fit RAG chunks (30-120 words is the GEO sweet spot)
- Paragraphs over 200 words (too long for retrieval, users scroll past)
- Paragraphs under 15 words (often stranded sentences)
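The paragraph checks above reduce to word counts against fixed thresholds. A sketch of how such a report might be assembled (thresholds taken from the list; the function name and dict shape are my own, not the tool's API):

```python
import re

def paragraph_report(text: str) -> list[dict]:
    # Split on blank lines, then flag each paragraph against the
    # 30-120 word RAG window, the 200-word ceiling, and the 15-word floor.
    report = []
    for para in re.split(r"\n\s*\n", text.strip()):
        n = len(para.split())
        report.append({
            "words": n,
            "rag_friendly": 30 <= n <= 120,  # GEO sweet spot
            "too_long": n > 200,             # retrieval-hostile
            "stranded": n < 15,              # likely a stranded sentence
        })
    return report
```

A five-word paragraph gets flagged `stranded`; a 40-word paragraph passes as `rag_friendly`.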
Why Flesch-Kincaid still matters in 2026
It's imperfect — it doesn't measure actual comprehension; it's a proxy for it. But Google's readers include:
- Human readers — who skim and abandon if the first paragraph grade-level overshoots them
- Quality rater humans — who evaluate "can a typical user understand this" directly
- LLM retrievers — which generate better-extractable chunks from shorter, clearer sentences
- The HCU algorithm — which correlates readability dropoff with abandonment and ranks accordingly
Optimizing for human readability also optimizes for AI extractability, which optimizes for citations. The three objectives align.
How to use it
- Go to /tools/readability-analyzer/
- Paste a URL or drop raw text in the text area
- Tool scores it in <1 second
- Read the per-metric report
- Copy the fix prompt — it produces a rewrite pass that shortens sentences, breaks passive voice, and adjusts paragraph length to target a specified grade level
What the tool doesn't measure
- Factual accuracy — a readable lie is still a lie
- Topical depth — high readability + shallow topic coverage = still thin content
- Actual reader comprehension — grade level is a proxy; real comprehension depends on the reader's prior knowledge
For a broader content-quality audit, pair with HCU Pattern Detector.
Related reading
- HCU Pattern Detector — identify specific Helpful Content red flags
- GEO Content Extractability — AI retrieval readiness
- Featured-Snippet Extractability
Fact-check notes and sources
- Flesch-Kincaid formula: Flesch-Kincaid readability tests (Wikipedia summary).
- Automated Readability Index: Smith & Senter (1967) — original ARI paper.
- Helpful Content Update guidance: Google Search Central — Creating helpful, reliable content.
- Quality Rater Guidelines (readability criteria): Google Search Quality Rater Guidelines PDF.
This post is informational, not writing or SEO-consulting advice. Mentions of Google and similar products are nominative fair use. No affiliation is implied.