Why 'Optimize For AI' Is Too Generic To Be Useful
Every AEO blog post published between 2023 and 2025 gives the same advice: "Write clearly. Use headings. Add numbers."

That advice isn't wrong, but it's too generic to explain why the same content ranks first on Perplexity and sixth on Gemini for the same query.

Each LLM has a different extraction profile. The model-level details matter more than the generic advice.

Per-model extraction preferences

Gemini prefers 40-60 word passages that read as self-contained definitions. The first sentence should restate the query in declarative form. Hedging ("might," "may," "in some cases") hurts extraction — Gemini picks the most definitive alternative.

ChatGPT favors definitive claims with specific numbers. A passage that says "repair usually takes between 4 and 6 hours at 70°F with standard shingle weight" wins over "roof repair can take a few hours depending on conditions." Lists and numbered steps extract well.

Claude tolerates longer sections (80-400 words is comfortable) and rewards nuanced phrasing with cited sources. Claude dislikes the over-simplified answers that win on Gemini; it prefers qualified assertions with attribution. Citation markers ([1], (2024), "according to") increase Claude's extraction rate.

Perplexity prioritizes recency and specificity. A passage with an "as of 2026" or "updated recently" marker wins over one that reads timelessly. Perplexity also rewards scannable structure — lists, tables, short paragraphs.
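As a rough illustration, the four profiles above can be written down as data. This is a sketch, not any vendor's published spec: the field names, penalty values, and word ranges are hypothetical encodings of the observations in this post.

```python
from dataclasses import dataclass

@dataclass
class ModelProfile:
    name: str
    word_range: tuple    # comfortable passage length (min, max words)
    hedging_penalty: float  # how strongly "might"/"may" hurts extraction
    wants_numbers: bool     # rewards specific figures
    wants_citations: bool   # rewards [1], (2024), "according to"
    wants_recency: bool     # rewards "as of 2026"-style markers

# Illustrative values only, derived from the descriptions above.
PROFILES = [
    ModelProfile("Gemini",     (40, 60),  0.8, False, False, False),
    ModelProfile("ChatGPT",    (60, 200), 0.6, True,  False, False),
    ModelProfile("Claude",     (80, 400), 0.1, False, True,  False),
    ModelProfile("Perplexity", (40, 200), 0.3, True,  True,  True),
]
```

Writing the profiles as data rather than prose makes the next step, scoring a section against all four at once, a simple loop.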

Same content, optimized for one model, loses on another.

What the Model-Specific Snippet Audit does

You paste a URL or article text. The tool:

  1. Splits the content into sections by paragraph breaks.
  2. For each section, computes heuristic signals: word count, average sentence length, presence of numbers, hedging language, list structure, citation markers, recency markers, query-first opening.
  3. Scores each section against each of the four model profiles (Gemini, ChatGPT, Claude, Perplexity).
  4. Highlights the best section per model — if it scores ≥75, that model will likely extract from this page.
  5. Emits an AI rewrite prompt that proposes targeted rewrites to lift sections below 75 on each model.
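Steps 1 and 2 can be sketched in a few lines of Python. This is a minimal approximation of the tool's heuristics, not its actual implementation; the hedge list and regexes are assumptions.

```python
import re

HEDGES = {"might", "may", "could", "perhaps", "possibly"}

def split_sections(article: str) -> list:
    """Step 1: split content into sections by paragraph breaks."""
    return [p.strip() for p in article.split("\n\n") if p.strip()]

def section_signals(text: str) -> dict:
    """Step 2: compute heuristic signals for one section."""
    words = text.split()
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
    return {
        "word_count": len(words),
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "has_numbers": bool(re.search(r"\d", text)),
        "hedging": sum(w.lower().strip(",.") in HEDGES for w in words),
        "has_citation": bool(re.search(r"\[\d+\]|\(\d{4}\)|according to", text)),
        "has_recency": bool(re.search(r"as of \d{4}|updated", text, re.I)),
    }
```

Step 3 then compares each section's signal dict against each model profile; step 4 keeps the best-scoring section per model.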

Reading the per-model scores

Each section gets four scores, one per model. Interpret them as extraction probabilities, not absolute ranks.

Best section ≥75 across all four models: strong. Whatever that section is, it's the page's AEO anchor. Protect it — don't restructure, don't move, don't bump the dateModified unless you're adding genuine updates.

Best section ≥75 on 2-3 models, lower on 1-2: partial optimization. The section works for certain retrieval patterns. The AI rewrite prompt proposes tweaks to bring the laggard models up.

No section ≥75 on any model: the page doesn't have a retrievable anchor. Either split off a new section specifically structured for AEO, or move the most extractable passage above the fold so retrieval crawlers encounter it first.
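The three interpretations above reduce to a simple decision rule. A minimal sketch, assuming scores arrive as a model-to-best-score mapping (the function name and threshold default are illustrative):

```python
def classify_page(best_scores: dict, threshold: int = 75) -> str:
    """Interpret per-model best-section scores as described above."""
    passing = sum(1 for s in best_scores.values() if s >= threshold)
    if passing == len(best_scores):
        return "strong anchor"          # protect it; don't restructure
    if passing == 0:
        return "no retrievable anchor"  # add or relocate an AEO section
    return "partial optimization"       # tweak for the laggard models
```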

The "one paragraph" AEO insert

The audit's most valuable output is the recommended new paragraph that scores ≥80 on at least three of four models. Its shape is convergent across models:

  • 45 words (in the Gemini sweet spot)
  • First sentence restates the query (helps all four)
  • 1-2 specific numbers (ChatGPT + Perplexity)
  • No hedging (Gemini + ChatGPT)
  • "As of [year]" recency marker (Perplexity + Claude)
  • One citation or source mention (Claude + Perplexity)

Write that paragraph. Insert it right after the H1 or after the first H2. That single paragraph often lifts AEO extraction probability 3-5x across all models.
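The checklist above can be turned into a quick pre-publish validator. This sketch checks the mechanical criteria only; query restatement needs a human read, so it's omitted. The hedge list and regexes are assumptions, not the tool's code:

```python
import re

HEDGES = {"might", "may", "could", "perhaps", "possibly"}

def check_aeo_paragraph(p: str) -> dict:
    """Check an insert paragraph against the convergent shape above."""
    words = p.split()
    return {
        "word_count_ok": 40 <= len(words) <= 60,
        "has_numbers": bool(re.search(r"\d", p)),
        "no_hedging": not any(w.lower().strip(",.") in HEDGES for w in words),
        "has_recency": bool(re.search(r"as of \d{4}", p, re.I)),
        "has_source": bool(re.search(r"\[\d+\]|according to|\(\d{4}\)", p, re.I)),
    }
```

If every value comes back True, the paragraph matches the shape that scores ≥80 on at least three models.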

Per-model fixes that don't overlap

To lift Gemini score: tighten a section to 40-60 words. Remove hedging. Lead with query restatement.

To lift ChatGPT score: add specific numbers. Remove "maybe," "might," "could." Expand to 60-200 words with structure (list, steps, explicit "first/second/third").

To lift Claude score: add citation markers or source attributions. Expand to 80-200 words if too terse. Include one nuanced qualifier ("in most cases, except when X").

To lift Perplexity score: add a recency signal ("as of Q1 2026", "updated April 2026"). Add one specific number. Use a bulleted list structure.
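Because the four fixes don't overlap, they can be stored as a lookup and applied only to the models a page is failing. A sketch (the mapping text paraphrases the guidance above; names are illustrative):

```python
FIXES = {
    "Gemini":     "tighten to 40-60 words; remove hedging; lead with query restatement",
    "ChatGPT":    "add specific numbers; remove maybe/might/could; expand to 60-200 words with structure",
    "Claude":     "add citation markers; expand to 80-200 words; include one nuanced qualifier",
    "Perplexity": "add a recency signal; add one specific number; use a bulleted list",
}

def fixes_for(best_scores: dict, threshold: int = 75) -> dict:
    """Map each model scoring below threshold to its non-overlapping fix."""
    return {m: FIXES[m] for m, s in best_scores.items() if s < threshold}
```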

The AI rewrite prompt in the tool generates the model-specific rewrites verbatim, so you don't have to translate these fixes by hand.

The compound play

Most SMBs only monitor one LLM (usually ChatGPT). When they "optimize for AI" they optimize for that one model. Per-model auditing reveals the lift available across all four, which typically adds up to 2-3x the total AI-referred traffic of single-model optimization.

Running this audit against the top 10 pages you want cited takes about two hours per quarter and produces measurable per-model extraction shifts within 30 days.

Fact-check notes and sources

  • Per-model extraction patterns: synthesized from observable behavior across Gemini, ChatGPT, Claude, Perplexity (2024-2026) plus each vendor's public technical documentation
  • 40-60 word Gemini sweet spot: observed across featured-snippet extraction patterns
  • Claude citation bonus: documented in Anthropic's retrieval-augmented-generation guidance

This post is informational, not AEO-consulting advice. Mentions of OpenAI, Anthropic, Google, and Perplexity are nominative fair use. No affiliation is implied.
