Write the sentence: "Most SMBs fail in their first three years."
Now write: "62% of SMBs fail within their first three years, according to Source, 2024."
Same gist. Radically different trust signal to a retrieval system. The first is a claim floating in space. The second is a claim with evidence, traceable to a named source.
LLMs increasingly appear to weight the difference. Observed behavior in 2024-2026 suggests Claude's retrieval favors sources with high attribution density, Gemini Grounding downranks pages with bare numeric claims, and ChatGPT's browsing mode prefers sources where facts are citation-linked.
Unattributed claims didn't use to hurt. They're starting to.
What the AI Attribution Coverage Audit does
You paste a URL. The tool:
- Fetches the page, strips navigation/footer boilerplate.
- Splits body into sentences.
- Identifies sentences that contain factual claims via pattern matching: percentages, currency amounts, years, quantities, "according to" language, research references.
- Checks each claim sentence for attribution indicators:
- Explicit attribution language ("according to," "based on," "per [source]")
- Parenthetical citations ("(Author, 2024)")
- Inline footnote markers ("[1]")
- Nearby hyperlinks in surrounding HTML
- Separately checks for page-level schema.org/Claim or ClaimReview markup.
- Computes attribution coverage percentage.
- Emits an AI prompt that proposes fixes by claim type.
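The claim-detection and attribution-check steps above can be sketched with simple regexes. A minimal Python sketch; the patterns here are illustrative heuristics, not the tool's actual rules:

```python
import re

# Patterns that mark a sentence as a factual claim (illustrative, not exhaustive).
CLAIM_PATTERNS = [
    r"\d+(\.\d+)?%",                          # percentages: "62%"
    r"[$€£]\s?\d[\d,.]*",                     # currency amounts: "$50B"
    r"\b(19|20)\d{2}\b",                      # years: "1923", "2024"
    r"(?i)\baccording to\b",                  # explicit attribution language
    r"(?i)\b(study|survey|research|report)\b" # research references
]

# Indicators that a claim sentence carries attribution.
ATTRIBUTION_PATTERNS = [
    r"(?i)\baccording to\b",
    r"(?i)\bbased on\b",
    r"\bper (the )?[A-Z]",                    # "per BLS", "per the OSHA..."
    r"\([A-Z][\w.&\- ]*,\s*(19|20)\d{2}\)",   # parenthetical: "(BLS, 2024)"
    r"\[\d+\]",                               # footnote markers: "[1]"
    r"(?i)<a\s+href=",                        # nearby hyperlink in raw HTML
]

def is_claim(sentence: str) -> bool:
    return any(re.search(p, sentence) for p in CLAIM_PATTERNS)

def is_attributed(sentence: str) -> bool:
    return any(re.search(p, sentence) for p in ATTRIBUTION_PATTERNS)

def attribution_coverage(sentences: list[str]) -> float:
    """Percentage of claim sentences that carry at least one attribution indicator."""
    claims = [s for s in sentences if is_claim(s)]
    if not claims:
        return 100.0  # no factual claims, nothing to attribute
    attributed = sum(1 for s in claims if is_attributed(s))
    return 100.0 * attributed / len(claims)
```

A page with one cited and one naked claim scores 50% coverage; opinion sentences are ignored entirely.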
What coverage percentage means
90%+ coverage: strong. Every material factual claim carries attribution. LLMs treat the page as a citable source.
70-90%: typical editorial site. Some claims attributed, some naked. The naked ones are usually the filler numbers ("over 50%," "most people") that could be sharpened.
50-70%: problematic. Naked claims dominate. Retrieval systems start filtering against this page for numeric-fact queries.
Below 50%: LLMs effectively won't cite this page for factual queries. The page reads as "claims without evidence" to the retrieval layer.
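The tiers above reduce to a small threshold function; labels here are shorthand for the descriptions, not output of the actual tool:

```python
def coverage_tier(pct: float) -> str:
    # Thresholds taken from the tiers described above.
    if pct >= 90:
        return "strong: treated as a citable source"
    if pct >= 70:
        return "typical: some claims attributed, filler numbers remain"
    if pct >= 50:
        return "problematic: naked claims dominate"
    return "weak: unlikely to be cited for factual queries"
```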
Which claims need citation vs which don't
Needs citation:
- Third-party market statistics ("the US roofing market is worth $50B")
- Historical facts ("founded in 1923")
- Scientific claims ("shingles last 20-25 years on average")
- Competitive benchmarks ("95% of contractors are unlicensed")
- Regulatory references ("per OSHA Regulation 29 CFR 1926...")
Doesn't need citation:
- First-hand business observations ("we've completed 500+ roofs")
- Opinion / editorial statements ("we believe in fair pricing")
- Definitional statements ("a shingle roof consists of overlapping asphalt tiles")
- Self-descriptive metadata ("serving Twin Falls since 2005")
The audit catches claim-like sentences across both categories. The AI fix prompt helps distinguish which need attribution and which are first-hand.
The four attribution patterns, ranked by LLM preference
1. Inline linked citation (strongest). "62% of SMBs fail in their first three years, per the <a href="https://bls.gov/...">BLS 2024 survey</a>."
2. Parenthetical citation with source + year. "62% of SMBs fail in their first three years (BLS, 2024)." — trusted because the format signals editorial rigor; LLMs recognize the pattern.
3. "According to" language without link. "According to BLS data, 62% of SMBs fail..." — acceptable, but slightly weaker than linked variants.
4. Schema.org Claim markup (emerging). A structured Claim node attached to the page explicitly calling out the fact + source. Still rare; first-mover signal. Strong long-term.
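Patterns 1-3 can be detected per sentence and ranked strongest-first (schema.org Claim markup is page-level, so it's checked separately). A hedged sketch with hypothetical regexes:

```python
import re

def attribution_strength(sentence_html: str) -> str:
    """Return the strongest attribution pattern found in one sentence of raw HTML."""
    if re.search(r"(?i)<a\s+href=", sentence_html):
        return "inline linked citation"          # pattern 1, strongest
    if re.search(r"\([A-Z][\w.&\- ]*,\s*(19|20)\d{2}\)", sentence_html):
        return "parenthetical citation"          # pattern 2
    if re.search(r"(?i)\baccording to\b|\bbased on\b", sentence_html):
        return "according-to language"           # pattern 3, unlinked
    return "none"
```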
The schema.org Claim opportunity
As of 2026, almost no SMB sites use schema.org/Claim or ClaimReview markup. It's an emerging structured-data type. Adding it is essentially free and sends a strong AI-trust signal:
{
  "@context": "https://schema.org",
  "@type": "ClaimReview",
  "claimReviewed": "62% of SMBs fail within their first three years",
  "itemReviewed": {
    "@type": "Claim",
    "appearance": "https://bls.gov/survey/2024/smb-failure-rates"
  },
  "author": { "@type": "Organization", "name": "Your Site" },
  "reviewRating": {
    "@type": "Rating",
    "ratingValue": 5,
    "bestRating": 5,
    "alternateName": "True"
  }
}
Pick your 3-5 most important factual claims per article and wrap each in a ClaimReview block. Crawlers typically pick up new markup within a few crawl cycles.
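If you're templating this per article, a small generator keeps the blocks consistent. A sketch assuming the same field shape as the example above (the rating fields follow schema.org's Rating type; adjust to your CMS):

```python
import json

def claim_review_jsonld(claim: str, source_url: str, site_name: str) -> str:
    """Build a ClaimReview JSON-LD script tag for one factual claim."""
    node = {
        "@context": "https://schema.org",
        "@type": "ClaimReview",
        "claimReviewed": claim,
        "itemReviewed": {"@type": "Claim", "appearance": source_url},
        "author": {"@type": "Organization", "name": site_name},
        "reviewRating": {
            "@type": "Rating",
            "ratingValue": 5,
            "bestRating": 5,
            "alternateName": "True",
        },
    }
    return ('<script type="application/ld+json">'
            + json.dumps(node, indent=2)
            + "</script>")
```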
The 90-day upgrade path
Week 1-2: Run the audit on your top 10 pages. Note naked-claim counts.
Week 3-4: For each naked claim on high-traffic pages, either (a) link to a canonical source, (b) add a parenthetical citation, or (c) remove the claim if you can't source it.
Week 5-8: Add schema.org/Claim markup to the 5 highest-importance facts per page.
Week 9-12: Re-audit. Coverage should be above 80%. Watch LLM-referrer metrics and AI-citation tracking tools for uplift over the next 30-60 days.
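For the re-audit step, tracking per-page uplift is just a before/after diff of coverage percentages; a trivial helper, assuming you've stored each audit run as a URL-to-coverage map:

```python
def coverage_delta(before: dict[str, float], after: dict[str, float]) -> dict[str, float]:
    """Per-page coverage change between two audit runs (positive = improvement)."""
    return {url: round(after[url] - before[url], 1)
            for url in before if url in after}
```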
Related reading
- AI Hallucination Detector — catches the inverse (LLM making up claims about you)
- E-E-A-T Author Entity Graph — authorship entity depth
- Schema Completeness — broader schema audit
- Grounding API Optimization Audit — Grounding weights attribution heavily
Fact-check notes and sources
- schema.org Claim / ClaimReview: schema.org/Claim, schema.org/ClaimReview
- Claim-style markup used by Google fact-check features: Google Search Central — Fact Check structured data
- Attribution-density signal in LLM retrieval: emergent community observation synthesized across Claude / Gemini / Perplexity behavior 2024-2026
This post is informational, not editorial-standards-consulting advice. Mentions of Google, Anthropic, OpenAI, Perplexity, BLS are nominative fair use. No affiliation is implied.