
Findings lists don't persuade — so I built an audit narrator


Here's a list of 30 SEO findings from a full site audit. You paste it into a Slack message to your marketing VP. What happens?

Nothing. The VP scans the first 4 items, zones out on "No meta description on /about," and skips to the next meeting.

Findings lists don't persuade. Narratives do.

The AI-Driven Audit Interpretation tool takes any findings list and rewrites it as an executive summary — categorized, prioritized by your business model, ending with a 30/60/90 sprint recommendation. Deterministic. No LLM API call. No token cost.

The title is a little misleading

"AI-Driven" is a stretch. The tool doesn't call an LLM. It runs a rules-based categorizer + severity detector + business-model weighting layer. The output is what an AI-driven tool would produce — narrative executive prose — but the engine is deterministic JavaScript.

I kept "AI-Driven" in the title because it matches the user's mental model of what the output feels like, and I wanted the tool discoverable by searches for AI-audit-interpretation tools. The pattern is intentionally transparent: you can read the rules, you know exactly what determines the output, and it always returns the same summary for the same inputs.

What it does

  1. Paste findings — one per line. Can be from anywhere: Mega SEO Analyzer output, Lighthouse report copy-paste, manual list, your own notes.
  2. Pick business type — SaaS, e-commerce, publisher, local service, B2B. Each has different priorities.
  3. Run — the tool categorizes each finding (hygiene / schema / performance / security / AEO / compliance / a11y / conversion / other), scores severity (critical / warning / info), and produces:
    • Executive summary (1-2 sentences with counts)
    • Per-category narrative paragraphs (sorted by business priority)
    • Business-model-specific commentary ("Ecom sites live or die on Product schema...")
    • 30/60/90-day next-step sprints
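The categorization step can be sketched as a keyword lookup. The table below is illustrative only — the category names match the list above, but the keyword lists are my assumptions, not the tool's actual rules:

```javascript
// Illustrative keyword table; the real tool's rules may differ.
const CATEGORY_KEYWORDS = {
  hygiene: ["meta description", "title tag", "canonical", "robots"],
  schema: ["schema", "structured data", "json-ld"],
  performance: ["lcp", "cls", "inp", "slow", "render-blocking"],
  security: ["csp", "https", "hsts", "header"],
  aeo: ["llms.txt", "ai retriev", "answer engine"],
  a11y: ["alt text", "contrast", "aria"],
};

// First matching category wins; anything unmatched falls into "other".
function categorize(finding) {
  const text = finding.toLowerCase();
  for (const [category, keywords] of Object.entries(CATEGORY_KEYWORDS)) {
    if (keywords.some((kw) => text.includes(kw))) return category;
  }
  return "other";
}
```

Because the lookup is a plain table scan with a fixed order, the same finding string always lands in the same category — which is the whole point of staying deterministic.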

Why deterministic beats LLM for this job

LLMs are non-deterministic. Same findings, different paraphrases. Over multiple runs, the output drifts. For audits — where consistency across months matters — drift is a bug.

Deterministic generation guarantees:

  • Same findings → same summary
  • Executive can compare month-over-month summaries directly
  • No token cost at scale (imagine 1000 tenant runs)
  • No API dependency, works offline

Trade-off: the prose is less expressive than GPT-5's. For this use case, I'll trade expressiveness for consistency every time.

Business-type weighting

Each business type has 3-4 priorities. Those categories sort first in the output. Business-specific commentary fires when certain conditions are met:

  • SaaS — prioritizes conversion + AEO + trust. Notes: "Slow SaaS landing pages kill signup; every extra second past 2s drops form completions ~7%."
  • E-commerce — prioritizes schema (Product) + CWV + Shopping feed. Notes: "Ecom sites live or die on Product schema; rich results drive CTR 20-40% higher."
  • Publisher — prioritizes article schema + content velocity + author authority + AEO. Notes: "Publishers need flawless hygiene because Google Discover and AI retrievers reject sloppy signals."
  • Local service — prioritizes GBP + local schema + NAP + reviews.
  • B2B — prioritizes trust signals + authority markers + lead form friction.

Same findings, different narratives depending on business type. An "LCP 3.4s" finding is "critical" for a SaaS landing page but "worth fixing" for a publisher article.
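The weighting layer is just a priority-aware sort. A minimal sketch, with assumed priority lists standing in for the real 3-4-priority tables per business type:

```javascript
// Assumed priority lists; the post's real tables carry 3-4 per type.
const PRIORITIES = {
  saas: ["conversion", "aeo", "security"],
  ecommerce: ["schema", "performance", "conversion"],
  publisher: ["schema", "hygiene", "aeo"],
};

// Categories on the business type's priority list sort first, in
// priority order; the rest follow alphabetically.
function sortCategories(categories, businessType) {
  const prio = PRIORITIES[businessType] ?? [];
  return [...categories].sort((a, b) => {
    const ia = prio.indexOf(a);
    const ib = prio.indexOf(b);
    if (ia !== -1 && ib !== -1) return ia - ib;
    if (ia !== -1) return -1;
    if (ib !== -1) return 1;
    return a.localeCompare(b);
  });
}
```

Swapping the business type reorders the narrative without touching the findings themselves — which is how the same "LCP 3.4s" line can lead the SaaS summary and sit mid-report for a publisher.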

Example output

Input (paste):

[fail] Hygiene: No meta description
[fail] Performance: LCP 4.8s (POOR)
[warn] Security: No CSP header
[warn] AEO: No llms.txt file
[fail] Schema: Missing Article schema
[warn] A11y: 3 redundant alt text issues

Business: Publisher

Output (Executive Summary):

This audit found 6 items of note — 3 critical, 3 warnings. For a publisher / media site serving readers seeking informational content, priorities below are weighted by what drives the outcomes you likely care about.

Output (On-page hygiene):

Critical gaps: Hygiene: No meta description. Publishers need flawless hygiene because Google Discover and AI retrievers reject sloppy signals.

Output (Suggested next steps):

  • Week 1: resolve the 3 critical items. These are usually one-line fixes per item.
  • Week 2: structured data sprint. Deploy Article, BreadcrumbList, and Organization schema where missing.
  • Week 2-3: performance sprint. Run the Code-Diff Patch Generator against your top 5 pages for one-shot fixes.
  • Week 3-4: AEO sprint. Publish llms.txt, claim Wikidata entry, add Person schema to every article.
  • Monthly: re-run Mega SEO Analyzer v2 with history, watch the Trend Dashboard.

The output is Markdown — copy and paste directly into email, Notion, Linear, Slack's markdown mode, or your existing reporting doc.
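Assembling the sections into that copy-pasteable Markdown can be sketched as follows; the heading levels and section shape here are assumptions, not the tool's exact export format:

```javascript
// Minimal sketch of the Markdown export: one summary block, then one
// heading + paragraph per category section, in priority order.
function toMarkdown(summary, sections) {
  const lines = ["## Executive Summary", "", summary, ""];
  for (const { title, body } of sections) {
    lines.push(`### ${title}`, "", body, "");
  }
  return lines.join("\n");
}
```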

How to use it

  1. Go to /tools/ai-audit-interpretation/
  2. Paste findings list. Prefixes [fail] [warn] [info] help the severity detector but aren't required.
  3. Pick your business type.
  4. Click Run.
  5. Scroll through the narrative. Click Copy as Markdown for the full exportable version.

Where the findings come from

This tool is a consumer of findings; other tools produce them. Typical pipelines:

  • Mega SEO Analyzer v2 findings → paste into this tool → executive narrative
  • Lighthouse report bullet list → paste → narrative
  • Internal audit doc written by your team → paste → structured interpretation
  • Competitor audit run against their site → paste → "what they need to fix that you don't"

Each pipeline gives you the narrative layer paid audit tools bake in but charge a premium for.


Fact-check notes and sources

  • Deterministic text-generation pattern: classic rules-based NLG (natural language generation), predates LLMs by decades. See SimpleNLG for the canonical open-source library.
  • Business-model weighting is subjective: based on observation of 50+ client audits across categories. Adjust in the script if your market differs.
  • Severity heuristics: fail/missing/broken → critical; warn/weak/thin → warning; else → info. Override by prefixing your findings with [fail] / [warn] / [info].
  • "7% drop per second on form completion" reference: Portent research.
  • "CTR 20-40% higher with rich results" reference: Milestone Internet research.
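The severity heuristic from the notes above can be sketched in a few lines: an explicit [fail] / [warn] / [info] prefix always wins, and keyword matching is the fallback.

```javascript
// Severity heuristic per the notes: prefix override first, then
// fail/missing/broken → critical, warn/weak/thin → warning, else info.
function detectSeverity(finding) {
  const text = finding.toLowerCase();
  const prefix = text.match(/^\[(fail|warn|info)\]/);
  if (prefix) {
    return { fail: "critical", warn: "warning", info: "info" }[prefix[1]];
  }
  if (/\b(fail|missing|broken)/.test(text)) return "critical";
  if (/\b(warn|weak|thin)/.test(text)) return "warning";
  return "info";
}
```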

This post is informational, not SEO-consulting or engineering advice. Mentions of Google, Lighthouse, Slack, Notion, Linear, and similar products are nominative fair use. No affiliation is implied.

