The Edge Cache Effectiveness Probe is the audit you reach for when you already suspect a problem in this dimension and need a fast, copy-paste-able fix list. It reuses the same chrome as every other jwatte.com tool — deep-links from the mega analyzers, AI-prompt export, CSV/PDF/HTML download — but the checks it runs are narrow and specific.
Probes a list of URLs via the fetch-page proxy, analyzes Cache-Status / Age / CF-Cache-Status / X-Cache headers, and computes per-host edge-cache hit ratio. Flags URLs bleeding origin traffic.
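The header triage described above can be sketched as a small classifier plus a per-host tally. This is a sketch under assumptions, not the probe's exact logic: the precedence order (RFC 9211 `Cache-Status` first, then CDN-specific headers, then the `Age` heuristic) and the value mapping are illustrative, and header keys are assumed pre-normalized to lowercase.

```javascript
// Sketch: classify a response's cache disposition from common CDN headers.
// Precedence and value mapping are assumptions, not the tool's exact logic.
function classifyCacheStatus(headers) {
  const h = (name) => (headers[name] || "").toLowerCase();

  // RFC 9211 Cache-Status, e.g. "ExampleCDN; hit" or "ExampleCDN; fwd=miss"
  const cacheStatus = h("cache-status");
  if (cacheStatus.includes("hit")) return "hit";
  if (cacheStatus.includes("fwd=")) return "miss";

  // Cloudflare CF-Cache-Status: HIT / MISS / DYNAMIC / EXPIRED / BYPASS
  const cf = h("cf-cache-status");
  if (cf === "hit") return "hit";
  if (cf) return "miss";

  // Varnish/Fastly-style X-Cache: "HIT", "MISS", sometimes "HIT, MISS" per tier
  const xCache = h("x-cache");
  if (xCache.includes("hit")) return "hit";
  if (xCache) return "miss";

  // Age > 0 implies the object was served from a shared cache.
  if (parseInt(h("age"), 10) > 0) return "hit";
  return "unknown";
}

// Per-host edge-cache hit ratio over a batch of probed URLs.
function hitRatioByHost(results) {
  const tally = {};
  for (const { url, headers } of results) {
    const host = new URL(url).host;
    tally[host] ??= { hits: 0, total: 0 };
    tally[host].total += 1;
    if (classifyCacheStatus(headers) === "hit") tally[host].hits += 1;
  }
  return Object.fromEntries(
    Object.entries(tally).map(([host, t]) => [host, t.hits / t.total])
  );
}
```

A host sitting well below ~0.8 on cacheable assets is the "bleeding origin traffic" case the probe flags.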
Why this dimension matters
Core Web Vitals (LCP ≤ 2.5s, INP ≤ 200ms, CLS ≤ 0.1) are a Google ranking factor. They also gate how often a page can appear in AI Overviews — slow pages get deprioritized in the answer-generation step after the retrieval step. Field data (CrUX, 28-day rolling real-user) beats lab data (Lighthouse) for this reason: Google indexes field metrics, not synthetic ones.
Common failure patterns
- LCP hero image with `loading="lazy"` — lazy-loading the LCP element defers it beyond the above-the-fold paint. The rule: the LCP image gets no lazy attribute; add `fetchpriority="high"` and preload it via `<link rel="preload" as="image">` with the right `imagesrcset` for responsive variants.
- CLS from late-loading web fonts — `font-display: swap` without a `size-adjust` / `ascent-override` fallback produces a layout shift when the font loads. Use `font-display: optional` on non-critical fonts, or match metrics precisely with a fallback stack.
- INP from long tasks on interaction — a single main-thread task over 50ms blocks the next paint. The fix is breaking up handlers with `requestIdleCallback`, `scheduler.yield()`, or post-message loops. Third-party analytics scripts are the most common culprit; `defer` plus Partytown or Web Worker offload helps.
- TTFB regressions from cold serverless — serverless functions with per-region cold starts add 300–1500ms for every geographically distant visitor. Edge runtimes (Cloudflare Workers, Vercel Edge, Netlify Edge Functions) cut this to ~50ms.
How to fix it at the source
Treat CrUX field data as the authoritative measure; Lighthouse lab data is a proxy. Instrument your own RUM (the `web-vitals` library emits LCP/CLS/INP/TTFB/FCP to any endpoint) for percentile breakdowns beyond CrUX's 28-day window. For images: preload the LCP image, lazy-load everything below the fold, and always provide intrinsic width/height. For scripts: use `defer` or `async`, or offload to a Web Worker; never load synchronously at the top of `<head>`.
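For the percentile breakdowns mentioned above: CrUX reports the p75 of each metric, and you can reproduce that cut over your own RUM beacons. A minimal sketch, assuming beacons shaped like the `web-vitals` callback payload (`{ name, value }`) and using the nearest-rank percentile convention (CrUX's exact aggregation differs):

```javascript
// Nearest-rank percentile over raw samples (p in 0..100).
function percentile(samples, p) {
  if (samples.length === 0) return NaN;
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// Group { name, value } beacons by metric and take p75 of each,
// mirroring the cut CrUX reports per metric.
function p75ByMetric(beacons) {
  const groups = {};
  for (const { name, value } of beacons) (groups[name] ??= []).push(value);
  return Object.fromEntries(
    Object.entries(groups).map(([name, values]) => [name, percentile(values, 75)])
  );
}
```

Because you hold the raw samples, the same grouping also yields p90/p99 or per-route splits that CrUX's 28-day rollup can't give you.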
When to run the audit
- After a major site change — redesign, CMS migration, DNS change, hosting platform swap.
- Quarterly as part of routine technical hygiene; the checks are cheap to run repeatedly.
- Before an investor / client review, a PCI scan, a SOC 2 audit, or an accessibility-compliance review.
- When a downstream metric drops (rankings, conversion, AI citations) and you need to rule out this dimension as the cause.
Reading the output
Every finding is severity-classified. The playbook is the same across tools:
- Critical / red: same-week fixes. These block the primary signal and cascade into downstream dimensions.
- Warning / amber: same-month fixes. Drag the score, usually don't block.
- Info / blue: context-only. The kind of finding a PR reviewer would flag but that wouldn't block merge.
- Pass / green: confirmation — keep the control in place.
Every audit also emits an "AI fix prompt" — paste it into ChatGPT, Claude, or Gemini for copy-paste code patches tied to your stack.
Related tools
- Mega Analyzer — One URL, every SEO/schema/E-E-A-T/voice/mobile/perf audit in one pass.
- CrUX Field Data Probe — Real-user Chrome UX Report p75 LCP / INP / CLS / FCP / TTFB via PageSpeed Insights API.
- Image LCP Candidate Audit — Identifies the likely LCP image, flags loading="lazy" conflict, missing width/height, no srcset, legacy formats.
- Font Loading Strategy Audit — Audits @font-face font-display, preload hints, woff2 coverage, Google Fonts vs self-hosted, FOUT/FOIT pattern risks, variable fonts.
- Critical CSS Inlining Audit — Counts render-blocking stylesheets, inlined