Mega Analyzer Methodology

Version 2026.05

The Mega Analyzer at /tools/mega-analyzer/ grades a URL across 10 buckets in one pass. This page documents what each bucket tests, how scores are produced, and which standards each check implements. The rubric is published openly so operators can verify the score against their own reading of the page.

How the score is produced

Every check returns one of four states:

  • pass — full credit (1.0).
  • warn — half credit (0.5). Used when the signal is present but partially correct or partially deployed.
  • fail — no credit (0.0).
  • info — informational only; excluded from the score.

Each bucket's score is the unweighted average of its checks (warns count as 0.5). The overall grade is the unweighted average of the nine audit buckets (the Mega AI fix prompt bucket is output, not input), mapped to a letter grade:

  • A: 90 – 100
  • B: 80 – 89
  • C: 70 – 79
  • D: 60 – 69
  • F: below 60
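The averaging and grade mapping above can be sketched as follows. This is an illustrative reconstruction of the rubric as documented, not the tool's actual inline code; function names are invented for the example.

```javascript
// Sketch of the scoring model described above. Names are illustrative,
// not the tool's actual identifiers.
const CREDIT = { pass: 1.0, warn: 0.5, fail: 0.0 }; // info is excluded

// A bucket's score: unweighted mean of its non-info checks, on a 0-100 scale.
function bucketScore(checks) {
  const scored = checks.filter(c => c.state !== "info");
  if (scored.length === 0) return null; // nothing left to score
  const total = scored.reduce((sum, c) => sum + CREDIT[c.state], 0);
  return (total / scored.length) * 100;
}

// Overall grade: unweighted mean of the 9 audit buckets.
function overallScore(bucketScores) {
  const valid = bucketScores.filter(s => s !== null);
  return valid.reduce((a, b) => a + b, 0) / valid.length;
}

function letterGrade(score) {
  if (score >= 90) return "A";
  if (score >= 80) return "B";
  if (score >= 70) return "C";
  if (score >= 60) return "D";
  return "F";
}
```

Note that a bucket with one pass, one warn, and one info check scores 75: the info check drops out of the denominator entirely.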

The "Mark It N/A" feature lets an operator exclude any check that does not apply to the site (e.g. e-commerce schema on a brochure page). N/A items are removed from both the score denominator and the AI fix prompt.
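The N/A exclusion amounts to filtering checks out before either the score or the prompt is computed. A minimal sketch, with invented check IDs for illustration:

```javascript
// Sketch of "Mark It N/A": excluded checks leave both the score
// denominator and the AI fix prompt. IDs here are illustrative only.
function applyNa(checks, naIds) {
  return checks.filter(c => !naIds.has(c.id));
}

const checks = [
  { id: "product-schema", state: "fail" }, // irrelevant on a brochure page
  { id: "title-length", state: "pass" },
];
const active = applyNa(checks, new Set(["product-schema"]));
// active now contains only title-length, so the bucket scores 100
// and the fix prompt no longer mentions Product schema.
```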

The 10 buckets

1. SEO

Title length and pixel width, meta description, canonical, indexability (robots meta + X-Robots-Tag), heading hierarchy (one H1, ordered H2 tree), word count, internal-link signal, hreflang sanity, breadcrumb continuity, and Open Graph / Twitter card completeness.
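As one example of how a single SEO check maps onto the pass/warn/fail model, here is a title-length sketch. The 30–60 character window is a common SEO guideline used for illustration, not necessarily the tool's exact thresholds.

```javascript
// Sketch of a title-length check in the pass/warn/fail model.
// The 30-60 character window is a common guideline, not
// necessarily the tool's exact thresholds.
function checkTitleLength(title) {
  const len = title.trim().length;
  if (len === 0) return "fail";               // missing title: no credit
  if (len >= 30 && len <= 60) return "pass";  // comfortably within limits
  return "warn";                              // present but too short or long
}
```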

2. Schema

JSON-LD extraction with recursive walk through @graph and nested object values. Detects FAQPage, Article / BlogPosting / NewsArticle, Product (with the physical-goods gate for shipping/returns), SoftwareApplication, HowTo, BreadcrumbList, Organization, Person, LocalBusiness, Service, SelfStorage, OfferCatalog. FAQ parity diff between visible questions and FAQPage entries flags ghost entries that Google suppresses. References: schema.org.
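The recursive walk through `@graph` and nested object values can be sketched like this; the function name is illustrative, but the traversal shape (arrays, `@graph`, nested objects, string or array `@type`) follows the description above.

```javascript
// Sketch of the recursive JSON-LD walk: collect every @type reachable
// through @graph arrays and nested object values.
function collectTypes(node, found = new Set()) {
  if (Array.isArray(node)) {
    node.forEach(n => collectTypes(n, found));
  } else if (node && typeof node === "object") {
    const t = node["@type"];
    if (typeof t === "string") found.add(t);
    else if (Array.isArray(t)) t.forEach(x => found.add(x));
    // Recurse into every value, which covers @graph, mainEntity, etc.
    Object.values(node).forEach(v => collectTypes(v, found));
  }
  return found;
}
```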

3. E-E-A-T

Author byline detection (rel=author, visible author block, microdata), datePublished / dateModified in Article/FAQPage/WebPage JSON-LD, visible "Last updated" stamp, author URL, sameAs array on Person/Organization nodes, OG Article extension (article:author, article:published_time, article:modified_time), Person schema with knowsAbout.
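Byline detection can be approximated with the signals listed above. The tool works on parsed HTML; the regex sketch below is only meant to show which markers count, and would be too coarse for production use.

```javascript
// Regex sketch of byline detection. The real checks parse HTML, but the
// signals are the same: rel=author links and JSON-LD author nodes.
function hasBylineSignal(html) {
  const relAuthor = /rel=["']?author["']?/i.test(html);
  const jsonLdAuthor = /"author"\s*:/.test(html);
  return relAuthor || jsonLdAuthor;
}
```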

4. Voice

Flesch reading ease, grade level, passive voice rate, sentence-length variance, CTA density, pronoun balance. Flags AI-generated tells without triggering false positives on humanized copy.
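The Flesch reading ease score is the standard formula 206.835 − 1.015 × (words/sentences) − 84.6 × (syllables/words). A sketch, with a rough vowel-group syllable heuristic that is almost certainly simpler than the tool's:

```javascript
// Rough syllable heuristic: count groups of consecutive vowels.
function countSyllables(word) {
  const groups = word.toLowerCase().match(/[aeiouy]+/g);
  return groups ? groups.length : 1;
}

// Standard Flesch reading ease formula.
function fleschReadingEase(text) {
  const sentences = text.split(/[.!?]+/).filter(s => s.trim()).length || 1;
  const words = text.split(/\s+/).filter(Boolean);
  const syllables = words.reduce((n, w) => n + countSyllables(w), 0);
  return 206.835 - 1.015 * (words.length / sentences)
                 - 84.6 * (syllables / words.length);
}
```

Short monosyllabic sentences score high (easy); long polysyllabic ones drive the score down.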

5. Mobile parity

Dual fetch with desktop Chrome and Android Chrome user agents. Structured diff of word count, H1/H2, schema types, HTML size. Surfaces mobile-first indexing regressions and cloaking.
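The diff step can be sketched as below. The fetch itself is omitted; assume `summarize()` runs once on the HTML returned to each user agent, and a non-empty diff is what gets surfaced. Field names are illustrative.

```javascript
// Summarize one fetched HTML document. Tag-stripping by regex is a
// sketch; the real checks do fuller parsing.
function summarize(html) {
  const text = html.replace(/<[^>]+>/g, " ");
  return {
    wordCount: text.split(/\s+/).filter(Boolean).length,
    h1Count: (html.match(/<h1[\s>]/gi) || []).length,
    htmlBytes: html.length,
  };
}

// Structured diff of the two summaries; any mismatch is a parity finding.
function parityDiff(desktop, mobile) {
  const diffs = [];
  for (const key of Object.keys(desktop)) {
    if (desktop[key] !== mobile[key]) {
      diffs.push({ key, desktop: desktop[key], mobile: mobile[key] });
    }
  }
  return diffs;
}
```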

6. Performance + AI signals

Render-blocking resources, image weight, total HTML size, render hints, key AI-readiness signals (robots.txt Content-Signal directive, Link response headers, /.well-known files: api-catalog [RFC 9727], MCP server card [SEP-2127] on both Cloudflare and SEP paths, Agent Skills v0.2.0, WebMCP imperative API, OAuth metadata [RFC 8414] and Protected Resource [RFC 9728], Web Bot Auth signature directory, x402 v2, MPP, UCP, ACP), markdown content negotiation, AGENTS.md, llms-ctx.txt / llms-ctx-full.txt, JSON Feed v1.1 enrichment fields, schema:CreativeWork.usageInfo.
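Several of the AI-readiness signals reduce to probing fixed well-known paths. A sketch of building that probe list, limited to the paths whose locations the cited RFCs define (the MCP, Agent Skills, and payment-protocol paths vary by spec and are omitted here):

```javascript
// Well-known paths with RFC-defined locations. The tool's actual probe
// list is longer and its order may differ.
const WELL_KNOWN = [
  "/.well-known/api-catalog",                // RFC 9727
  "/.well-known/oauth-authorization-server", // RFC 8414
  "/.well-known/oauth-protected-resource",   // RFC 9728
];

function probeUrls(origin) {
  return WELL_KNOWN.map(p => new URL(p, origin).href);
}
```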

7. Accessibility (WCAG 2.2 AA)

Color contrast, alt text coverage, heading hierarchy, ARIA validity, keyboard navigation hints, target size, form labels. Critical AA failures are rendered as a separate red-bordered section. Reference: WCAG 2.2.
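The contrast check follows the WCAG 2.2 math: relative luminance per channel with gamma expansion, then the ratio (L1 + 0.05) / (L2 + 0.05), with 4.5:1 as the AA threshold for normal text.

```javascript
// WCAG relative luminance from an [r, g, b] triple in 0-255.
function relativeLuminance([r, g, b]) {
  const [R, G, B] = [r, g, b].map(v => {
    const c = v / 255;
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * R + 0.7152 * G + 0.0722 * B;
}

// Contrast ratio (L1 + 0.05) / (L2 + 0.05), lighter color on top.
function contrastRatio(fg, bg) {
  const [hi, lo] = [relativeLuminance(fg), relativeLuminance(bg)]
    .sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}
```

Black on white yields the maximum ratio of 21:1; anything below 4.5:1 for body text is an AA failure.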

8. Indexing hygiene

robots.txt presence and structure, sitemap.xml presence and lastmod truthfulness, image sitemap presence, ai.txt / humans.txt / security.txt [RFC 9116], canonicalization consistency, noindex / nofollow leakage, redirect-chain depth.
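Two of these checks sketched in the same pass/fail style; the regexes are illustrative simplifications of what a full parser would do.

```javascript
// Does robots.txt declare at least one Sitemap: line?
function robotsDeclaresSitemap(robotsTxt) {
  return /^sitemap:\s*\S+/im.test(robotsTxt);
}

// Noindex leakage: a robots meta tag carrying "noindex" on a page
// that is supposed to be indexable.
function hasNoindexLeak(html) {
  const meta = html.match(/<meta[^>]+name=["']robots["'][^>]*>/i);
  return meta ? /noindex/i.test(meta[0]) : false;
}
```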

9. Retrieval (AEO)

Answer-engine and citation readiness: passage-retrievability paragraph length and self-containment, question-shaped H2/H3 framing, definition-first opening sentences, comparison-table presence, named entity density, anchor stability, llms.txt presence and shape per llmstxt.org.
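The passage-retrievability idea is that a paragraph should be short enough to lift whole into an answer box. A sketch, with an 80-word ceiling chosen for illustration rather than taken from the tool:

```javascript
// Flag paragraphs too long to be retrieved as a self-contained passage.
// The 80-word ceiling is an illustrative threshold.
function longParagraphs(paragraphs, maxWords = 80) {
  return paragraphs.filter(
    p => p.split(/\s+/).filter(Boolean).length > maxWords
  );
}
```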

10. Mega AI fix prompt

Output bucket. Assembles every fail and warn into a single prompt that is ready to paste into Claude, ChatGPT, Cursor, Windsurf, Codex, or any coding agent. Includes deeper-dive tool references for any dimension that needs more depth than the single-pass audit can give.
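The assembly step is straightforward: only fails and warns survive the filter, and each becomes a numbered instruction. A sketch with invented field names:

```javascript
// Sketch of fix-prompt assembly: every fail and warn becomes a numbered
// instruction; pass, info, and N/A items never reach the prompt.
function buildFixPrompt(checks) {
  const issues = checks.filter(c => c.state === "fail" || c.state === "warn");
  const lines = issues.map((c, i) => `${i + 1}. [${c.state}] ${c.advice}`);
  return ["Fix the following issues on this page:", ...lines].join("\n");
}
```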

What the tool cannot see

Security headers, CSP, 301 redirects, UA-dependent rendering, and schema injected at build time live in a hosting / CDN / SSG layer that this audit cannot directly inspect. The tool fetches raw HTML; it does not execute JavaScript. JS-rendered content that requires a headless browser to materialize is not scored. This matches how most AI crawlers actually fetch a site today.

Source

The tool itself is at /tools/mega-analyzer/. The score logic lives in the page's inline JavaScript; the rendering helpers are in /js/audit-scorecard.js, /js/audit-toolkit.js, /js/aeo-geo-checks.js, and /js/indexing-hygiene-checks.js. Each rendered finding pill links to a deeper-dive tool that focuses on the same check in isolation.


Last updated: April 2026