
Why The Same Page Costs 30% More To Cite In One LLM Than Another


The same 1000-word article costs roughly 1300 tokens in OpenAI's o200k tokenizer, 1450 in cl100k, 1400 in Anthropic's, 1500 in Gemini's, and 1700 in Llama's. That's a 30% spread on identical English.

For retrieval-augmented systems pulling your page into a context window, this matters. Token budgets are finite. If your prose tokenizes inefficiently — long em-dash chains, dense numeric tables, exotic punctuation, code blocks rendered as prose — the system pulls less of your content per dollar. Less content per dollar means less of you in the answer.

Most SMB sites don't think about tokenizer efficiency at all. The ones that start to over the next 12 months will be the ones LLMs cite most often per crawl.

What the LLM Tokenizer Efficiency Audit does

You paste a URL or raw text. The tool:

  1. Estimates token counts across five tokenizer families: OpenAI cl100k (GPT-3.5 / GPT-4), OpenAI o200k (GPT-4o / o1 / o3), Anthropic (Claude 3 / 4), Google Gemini, Meta Llama.
  2. Computes tokens-per-word ratios for each.
  3. Detects token-inflation signals: high punctuation density, numeric density, code block frequency, table heaviness, URL repetition, exotic Unicode.
  4. Flags sentences that token-bloat the worst.
  5. Emits an AI prompt to rewrite the page for tokenizer-friendliness.
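Only OpenAI publishes its tokenizers (via the tiktoken library), so a portable audit has to estimate the other families. A minimal sketch of that estimation step, using the per-family tokens-per-word ratios quoted at the top of this article as assumed constants (the function and the ratio table are illustrative, not the tool's actual code):

```python
import re

# Assumed tokens-per-word ratios for English prose, derived from the
# counts quoted in this article (1000 words -> ~1300 tokens in o200k,
# ~1700 in Llama, etc.). Not official vendor numbers.
RATIOS = {
    "openai_o200k": 1.30,
    "openai_cl100k": 1.45,
    "anthropic": 1.40,
    "gemini": 1.50,
    "llama": 1.70,
}

def estimate_tokens(text: str) -> dict[str, int]:
    """Rough per-family token estimates from a plain word count."""
    words = len(re.findall(r"\S+", text))
    return {family: round(words * ratio) for family, ratio in RATIOS.items()}
```

For exact counts on the two OpenAI families you can swap the estimate for tiktoken's `get_encoding("cl100k_base")` / `get_encoding("o200k_base")` and take `len(enc.encode(text))`; the other three families stay estimates.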

Why tokenization varies between models

Each LLM family trains its own subword tokenizer on a different corpus. OpenAI's o200k was trained on a heavier multilingual + code corpus, so it has more single-token entries for technical terms. Anthropic's tokenizer is closer to OpenAI's cl100k for English but handles whitespace differently. Gemini's tokenizer leans toward shorter subwords (better multilingual coverage, worse English compression). Llama's tokenizer is the most fragmented for English prose because it's optimized for being small and fast at inference.

Same English sentence, five different token counts. The spread is real and measurable.

The five token-inflation signals

1. Punctuation density. Em-dashes, ellipses, smart quotes, semicolons — each tokenizes as its own token in most models, sometimes two. A page with 5% punctuation-by-character has measurably worse compression than one with 2%.

2. Numeric density. Numbers tokenize per-digit in most models. "$1,234,567" is six tokens (or more), not one. Pages full of statistics, year ranges, phone numbers, and prices balloon.

3. Code blocks rendered as prose. Function names, snake_case identifiers, JSON keys all fragment heavily. A code block that looks compact to a human is often 3-5x more tokens than equivalent prose.

4. Tables. Pipe characters, dashes, and alignment whitespace each tokenize. A 10-row markdown table can easily be 200+ tokens for content that would be 50 tokens as prose.

5. URL repetition. Long URLs with query strings tokenize per-segment. Pages that link the same long URL 8 times pay for it 8 times in retrieval.
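The first, second, and fifth signals can be approximated without any tokenizer at all. A rough detector sketch (the punctuation set, the URL regex, and the output shape are our assumptions, not the tool's actual code):

```python
import re
from collections import Counter

def inflation_signals(text: str) -> dict[str, float]:
    """Heuristic token-inflation signals; thresholds are up to the caller."""
    n = max(len(text), 1)
    # Signal 1: density of punctuation that tends to cost extra tokens
    # (em-dash, en-dash, ellipsis, semicolon, smart quotes).
    punct = sum(ch in "\u2014\u2013\u2026;\u201c\u201d\u2018\u2019" for ch in text)
    # Signal 2: digits usually tokenize per-digit or per-few-digits.
    digits = sum(ch.isdigit() for ch in text)
    # Signal 5: every repeat of a long URL is paid for again in retrieval.
    urls = Counter(re.findall(r"https?://\S+", text))
    repeated_urls = sum(count - 1 for count in urls.values() if count > 1)
    return {
        "punctuation_density": punct / n,
        "numeric_density": digits / n,
        "repeated_urls": float(repeated_urls),
    }
```

A page scoring above roughly 2% punctuation-by-character, per the figure in signal 1, is a rewrite candidate.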

What the ratios mean

Tokens-per-word under 1.3: efficient. Content compresses well. Retrieval systems pull more of your page per dollar.

1.3-1.6: typical. Most well-written English prose lands here.

1.6-2.0: inflated. Usually heavy punctuation, dense numerics, code blocks, or tables. Rewritable.

Above 2.0: extreme inflation. Almost certainly a code-heavy or data-heavy page. Consider extracting the dense content into linked sub-pages so the main page tokenizes cleanly for retrieval.
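The bands above map directly onto a small classifier. A sketch (the helper name is ours; the thresholds are the article's):

```python
def classify_ratio(tokens: int, words: int) -> str:
    """Bucket a page by its tokens-per-word ratio, using the bands above."""
    ratio = tokens / words
    if ratio < 1.3:
        return "efficient"   # compresses well; more of the page per dollar
    if ratio < 1.6:
        return "typical"     # most well-written English prose
    if ratio <= 2.0:
        return "inflated"    # punctuation, numerics, code, or tables
    return "extreme"         # consider splitting dense content out
```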

The 30-day upgrade path

Week 1: Run the audit on your top 10 most-cited pages (whatever you can identify from server logs or AI-citation tools). Note tokens-per-word for each.

Week 2: For pages above a 1.6 ratio, identify the worst offenders by section. Long tables → convert to bulleted prose. Code blocks → move to a separate /snippets/ page and link. Dense numeric paragraphs → break into sentences with the numbers spaced out.

Week 3: Rewrite punctuation-heavy passages. Replace em-dash chains with periods. Drop redundant ellipses. Use straight quotes if the design allows.
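The Week 3 punctuation pass can be partly mechanized. A minimal sketch, assuming an em-dash becomes a sentence break and smart quotes become straight ones (the replacement table is illustrative; review the output by hand, since an em-dash does not always mark a sentence boundary):

```python
# Illustrative replacement table for the Week 3 pass.
REPLACEMENTS = {
    "\u2014": ". ",   # em-dash -> period + space (crude; review output)
    "\u2026": "",     # ellipsis dropped
    "\u201c": '"',    # smart double quotes -> straight
    "\u201d": '"',
    "\u2018": "'",    # smart single quotes -> straight
    "\u2019": "'",
}

def normalize_punctuation(text: str) -> str:
    """Apply the straight-quote / sentence-break rewrites from Week 3."""
    for old, new in REPLACEMENTS.items():
        text = text.replace(old, new)
    return text
```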

Week 4: Re-audit. Aim for 1.3-1.5 across all five tokenizers. Watch your AI-citation rate over the following 30-60 days.

Why this matters more in 2026 than it did in 2024

In 2024, retrieval systems mostly pulled a fixed top-N set of passages, and the per-token cost was hidden inside the model price. By 2026, several things have shifted:

  • Context windows are larger but retrieval budgets are stricter. A 1M-token context doesn't mean RAG systems pull 1M tokens — they pull what their cost ceiling allows.
  • Cost-per-citation is now visible in some agent frameworks. Cheaper-to-cite pages get cited more.
  • Multiple-tokenizer optimization matters because answer-engine traffic is split across OpenAI, Anthropic, Gemini, Perplexity, You.com, and several smaller providers — each with its own tokenizer and pricing.

A page that tokenizes well in all five families is a page that gets cited across all five answer engines.

Fact-check notes and sources

This post is informational, not LLM-cost-engineering advice. Mentions of OpenAI, Anthropic, Google, Meta, Perplexity are nominative fair use. No affiliation is implied.

