Score any article on the four research-backed AI citation frameworks: BLUF (bottom-line-up-front), definite phrasing density, entity density, and strategic repetition. Per Kevin Indig's research, citation winners are roughly twice as likely to use definite language and average around 20.6% entity density (vs 5-8% for ordinary prose).
Each H2 section's first sentence should deliver the section's main point, not throat-clearing. We score the share of section-leading sentences that contain a definite claim (subject + assertion verb).
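A minimal sketch of the BLUF check, assuming each section arrives as a plain-text string; `ASSERTION_VERBS` is an illustrative word list, not the full heuristic:

```python
import re

# Illustrative assertion-verb list; the real heuristic may be broader.
ASSERTION_VERBS = {"is", "are", "means", "equals", "refers", "defines", "delivers"}

def bluf_score(sections: list[str]) -> float:
    """Percentage of sections whose first sentence contains an assertion verb."""
    def leads_with_claim(section: str) -> bool:
        # Take the section's first sentence and look for a definite-claim verb.
        first = re.split(r"(?<=[.!?])\s+", section.strip())[0]
        words = {w.strip(".,").lower() for w in first.split()}
        return bool(words & ASSERTION_VERBS)

    if not sections:
        return 0.0
    return 100.0 * sum(leads_with_claim(s) for s in sections) / len(sections)
```

A throat-clearing opener like "In this section we will explore..." contains no assertion verb and scores zero for that section.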
Citation winners use direct phrasing (is, refers to, means, defined as, equals) about twice as often as uncited content. We measure hedge-word occurrences (may, might, could potentially, perhaps, seems, appears) as a percentage of total sentences. Target: under 1.5%.
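A sketch of the hedge-rate measurement, assuming the hedge list above; the regex word boundaries keep "mayor" from matching "may":

```python
import re

# Hedge terms from the spec; \b keeps substrings like "mayor" from matching.
HEDGES = re.compile(r"\b(may|might|could potentially|perhaps|seems?|appears?)\b", re.I)

def hedge_rate(text: str) -> float:
    """Hedge-word occurrences per 100 sentences (target: under 1.5)."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    if not sentences:
        return 0.0
    return 100.0 * len(HEDGES.findall(text)) / len(sentences)
```

One hedge across two sentences yields a rate of 50.0, far above the 1.5 target, which is why the threshold effectively demands near-zero hedging.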
Cited passages average around 20.6% entity density (proper nouns, brand names, specific tools, numbers). Standard prose averages 5-8%. We approximate this by counting capitalized non-sentence-start tokens, named numbers, and known-domain references.
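A rough sketch of the capitalization-and-numbers approximation; it covers capitalized non-sentence-start tokens and numeric tokens but not the known-domain-reference lookup, which would need a reference list:

```python
import re

def entity_density(text: str) -> float:
    """Approximate entity density: capitalized tokens that do not start a
    sentence, plus numeric tokens, as a percentage of all word tokens."""
    tokens = entities = 0
    for sent in re.split(r"(?<=[.!?])\s+", text.strip()):
        for i, raw in enumerate(sent.split()):
            word = raw.strip(".,;:!?()\"'")
            if not word:
                continue
            tokens += 1
            if word[0].isdigit():               # named numbers: 20.6%, 2024
                entities += 1
            elif i > 0 and word[0].isupper():   # capitalized, non-sentence-start
                entities += 1
    return 100.0 * entities / tokens if tokens else 0.0
```

Skipping sentence-initial capitals avoids counting ordinary sentence starts as entities, at the cost of missing a proper noun that opens a sentence.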
The strongest claim should appear in 2-3 placements (intro, mid-article reminder, conclusion). We extract the most-emphasized noun phrase and check whether it surfaces in at least two of those three locations.
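A sketch of the placement check, assuming the key noun phrase has already been extracted; the 20% intro/conclusion boundaries are illustrative cutoffs, not part of the spec:

```python
def placement_zones(text: str, phrase: str) -> int:
    """Count how many of three zones -- intro (first 20%), middle,
    conclusion (last 20%) -- contain the key phrase, case-insensitively."""
    n = len(text)
    zones = (text[: n // 5], text[n // 5 : 4 * n // 5], text[4 * n // 5 :])
    needle = phrase.lower()
    return sum(needle in zone.lower() for zone in zones)
```

A phrase placed in the intro and conclusion but absent from the body scores 2, which satisfies a two-of-three threshold.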
Heuristic on-page analysis only. Citation behavior depends on many factors outside any single page. Use the score as a structural-readiness indicator, not a guaranteed citation forecast. Indig's published research is the source for entity-density and definite-phrasing benchmarks.