A problem with in-browser SEO audit tools: once you close the tab, the scan is gone. You re-run the audit next week, you get different results because the site changed, and you have no baseline to compare against. You found a great AI fix prompt generated from that scan, but you didn't save it. You shared the scan with a teammate by screenshotting — because there was no way to share the actual data.
The Mega Analyzer and Site Analyzer now have Export scan and Import scan buttons. Click export, download a .json file. Click import on a different machine or next month, and the entire scan rehydrates — every tab, every check, every score. The AI fix prompt regenerates from the imported data. No re-fetch required unless you want fresh results.
What goes into the exported JSON
The export captures everything needed to reconstruct the UI state without re-fetching:
{
"version": 1,
"tool": "mega-analyzer",
"timestamp": "2026-04-20T15:23:00Z",
"url": "https://example.com/",
"R": {
"url": "https://example.com/",
"d": { /* parsed page data: title, meta, H1/H2, schema, voice, images... */ },
"scores": { "overall": 86, "seo": 93, "schema": 100, "eeat": 65, ... },
"crawl": { "score": 88, "fails": [...], "checks": [...] },
"aux": { "robots": "...", "sitemap": "...", "llms": "..." },
...
},
"na": ["k3d2p", "1xm7q", ...]
}
The R field is the full scan state. DOM node references are stripped (they don't survive serialization), but every value derived from the DOM — title, meta description, H1 text, schema types, word count, voice metrics, crawl-validation results — is preserved.
The na field holds your Not Applicable dismissals: the rule-ID hashes of every check you silenced. Importing restores them. Export a scan with 20 dismissals, share it with a colleague, and their import restores the same 20. The adjusted score shown in the dismissal banner rehydrates automatically: it's recomputed on each render pass from the current check set and dismissal set rather than stored as a static value, so the post-import number reflects the imported checks against the imported dismissals. No stale cached number, and no way for the exported score to misrepresent the underlying state.
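The recomputation can be pictured with a minimal sketch. The tool's real scoring formula is internal; for illustration only, assume the adjusted score is the pass ratio over non-dismissed checks, and that the check objects and IDs below are hypothetical:

```javascript
// Hypothetical sketch: recompute an adjusted score from a check list and a
// Set of dismissed rule-ID hashes. Pass-ratio scoring is an assumption here,
// not the tool's actual formula.
function adjustedScore(checks, naSet) {
  const active = checks.filter(c => !naSet.has(c.id)); // drop dismissed checks
  if (active.length === 0) return 100;                 // nothing left to fail
  const passed = active.filter(c => c.pass).length;
  return Math.round((passed / active.length) * 100);
}

// Example: three checks, the one failing check dismissed as N/A
const checks = [
  { id: 'k3d2p', pass: true },
  { id: '1xm7q', pass: false },
  { id: 'z9f4t', pass: true }
];
console.log(adjustedScore(checks, new Set(['1xm7q']))); // 100
```

Because the score is a pure function of (checks, dismissals), importing both inputs is enough to get the same number back without caching it.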
Three things you can do with an exported scan
1. Regenerate the AI fix prompt without re-fetching
The Mega AI Prompt is assembled from the scan data (R) at the moment the tab renders. After importing a scan, switch to the Mega AI Prompt tab — the prompt reassembles from the imported data and includes your dismissals. Click the "Copy mega prompt" button. Paste into Claude, ChatGPT, or your local LLM. You've regenerated the prompt from a month-old scan without hitting the site again.
Why this matters: sites change. The scan you ran last week reflected the site as it was then. If you close your browser, lose the prompt, and re-scan today, the prompt will reflect a different snapshot. Exporting the original scan preserves the exact context you want the LLM to reason about.
2. Share scans with teammates or AI agents
Email the JSON to a colleague. They import, they see exactly what you saw. No screenshots, no "can you re-run it for me", no divergent results because they ran it from a different network (which could route to a CDN edge with different headers).
For AI agents specifically: export the scan, attach the JSON to a conversation, the agent has access to the full structured data. Much more useful than pasting a screenshot of the Full Summary tab. The agent can parse the JSON, reason about specific checks, and produce recommendations grounded in the actual scan.
3. Diff-track a site over time
Export today's scan. Ship fixes. Next week, re-run the audit and export again. Open both JSON files. Diff them. You now have a machine-readable changelog of what moved: score deltas by bucket, checks that flipped from fail to pass, new warnings that appeared as content was added.
A shell one-liner to spot check deltas:
jq '.R.scores' scan-2026-04-20.json
jq '.R.scores' scan-2026-04-27.json
Or a deeper diff comparing crawl-check arrays:
diff <(jq -r '.R.crawl.fails[].title' scan-2026-04-20.json | sort) \
<(jq -r '.R.crawl.fails[].title' scan-2026-04-27.json | sort)
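If you'd rather compute the deltas programmatically than eyeball jq output, a small Node script works too. This sketch assumes the R.scores shape shown earlier; scoreDeltas() is illustrative, not part of the tool:

```javascript
// Compute per-bucket score deltas between two exported scans.
// Assumes both files follow the { R: { scores: { ... } } } shape above.
function scoreDeltas(oldScan, newScan) {
  const deltas = {};
  for (const [bucket, oldVal] of Object.entries(oldScan.R.scores)) {
    const newVal = newScan.R.scores[bucket];
    if (typeof newVal === 'number') deltas[bucket] = newVal - oldVal;
  }
  return deltas;
}

// Usage (Node):
//   const fs = require('fs');
//   const [a, b] = process.argv.slice(2).map(f => JSON.parse(fs.readFileSync(f, 'utf8')));
//   console.log(scoreDeltas(a, b));
```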
Import behavior
Importing a scan loads the data and re-renders every tab against the imported state. A yellow banner appears: "Viewing imported scan from [timestamp] — some tabs may show stale data. Re-run for fresh results."
The banner flags the key limitation: the imported scan is a snapshot. The Accessibility (A11y) tab depends on a live DOM parse which isn't preserved in the export — you'll see the results that were captured at export time, but running the tool's probe-style checks (like the 404-template probe) won't re-fire. Everything else (scores, checks, full summary, mega AI prompt) works as-is.
The Re-run button next to Export/Import refreshes the scan: re-fetches the URL, re-analyzes, updates the R object with live data. Your N/A dismissals persist across the re-run. You go from "imported snapshot from last month" to "fresh scan with the same dismissal policy" in one click.
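The persistence hinges on the loadNASet() helper that the export code below also calls. A minimal sketch, assuming dismissals live under a jw-na-&lt;tool&gt;-&lt;url&gt; localStorage key (matching the key the import path writes); the shipped helper may differ in detail:

```javascript
// Sketch of loadNASet(): reads the N/A dismissal set from localStorage.
// The key shape mirrors the `jw-na-${tool}-${url}` key used on import.
function loadNASet(tool, url) {
  const raw = localStorage.getItem(`jw-na-${tool}-${url}`);
  return new Set(raw ? JSON.parse(raw) : []);
}

// Because the set is keyed by tool + URL and stored separately from R,
// a re-run (which only replaces R with fresh data) never touches it:
// the dismissal policy survives the refresh.
```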
The file naming pattern
Exports are named <tool>-scan-<hostname>-<date>.json. Examples:
mega-analyzer-scan-example.com-2026-04-20.json
site-analyzer-scan-jwatte.com-2026-04-20.json
This is chosen so chronological sorting in your Downloads folder matches scan order. When you accumulate 6 exports for the same site, they sort by date automatically and you can diff adjacent weeks.
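The sorting claim is easy to verify: because the date is ISO-formatted (YYYY-MM-DD) and comes last in the name, plain lexicographic sort is also chronological per site. A quick demonstration with hypothetical filenames:

```javascript
// ISO dates sort correctly as text, so sorting the filenames sorts the scans.
const files = [
  'mega-analyzer-scan-example.com-2026-04-27.json',
  'mega-analyzer-scan-example.com-2026-04-13.json',
  'mega-analyzer-scan-example.com-2026-04-20.json'
];
console.log([...files].sort());
// oldest first: ...04-13, then ...04-20, then ...04-27
```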
What's not in the export (by design)
The export omits:
- DOM nodes. The raw parsed doc object is a live DOM that doesn't survive JSON. Every downstream value derived from it (text extracts, element counts, computed metrics) is preserved, but re-running the A11y tab's DOM-walkers against the snapshot would require re-fetching.
- Browser-only state. localStorage entries other than the scan's own N/A set are not exported. If you have other site-specific preferences in localStorage (like a preferred prompt-quality level), those stay local.
- Network tracing info. The tool runs through a Netlify Function proxy; the export doesn't include timing data from that hop. If you need that, use Chrome DevTools network tab alongside the scan.
These omissions keep exports small (typically 40-120 KB) and keep imports fast.
Security note on importing foreign scans
The import parses JSON and assigns it to the tool's internal state. Parsing JSON executes no code. But there is one consideration: a tampered scan JSON could carry HTML or script fragments in its title/meta fields, and the tool renders those fields back into the DOM via textContent and innerHTML template interpolation.
The tool escapes values via its esc() helper before rendering, which prevents XSS from malicious title strings. Still, the general rule applies: only import scans from sources you trust. If a colleague sends you a scan export and you have any doubt it's really from them, open the JSON in a text editor first and read it — the file is human-readable.
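The exact esc() implementation isn't shown here. A minimal sketch of what an HTML-escaping helper like it typically does (the shipped version may differ):

```javascript
// Hypothetical esc()-style helper: HTML-escapes a string before it is
// interpolated into innerHTML templates. The standard five-character escape;
// ampersand must be replaced first so the other entities aren't double-escaped.
function esc(s) {
  return String(s)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

console.log(esc('<script>alert(1)</script>'));
// &lt;script&gt;alert(1)&lt;/script&gt;
```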
The tiny implementation
Serialize: walk R, drop DOM refs, stringify, trigger a download. About 30 lines of JavaScript.
function jwExportScan(tool, R) {
const sanitized = sanitizeR(R); // drops Nodes, Windows, functions
const payload = {
version: 1,
tool,
timestamp: new Date().toISOString(),
url: R.url,
R: sanitized,
na: [...loadNASet(tool, R.url)]
};
const blob = new Blob([JSON.stringify(payload, null, 2)], { type: 'application/json' });
const a = document.createElement('a');
a.href = URL.createObjectURL(blob);
a.download = `${tool}-scan-${new URL(R.url).hostname}-${payload.timestamp.slice(0, 10)}.json`;
a.click();
}
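sanitizeR() is called above but not listed. A minimal sketch, assuming a recursive walk that keeps plain JSON values and drops anything that can't serialize (DOM nodes, window references, functions); the shipped version may differ:

```javascript
// Sketch of sanitizeR(): recursively copy plain data, drop non-serializable
// values. The Node/Window instanceof checks are guarded so the function also
// runs outside a browser, where those globals don't exist.
function sanitizeR(value) {
  if (value === null || ['string', 'number', 'boolean'].includes(typeof value)) {
    return value;                                     // plain JSON scalar: keep
  }
  if (typeof value !== 'object') return undefined;    // function, symbol: drop
  if (typeof Node !== 'undefined' && value instanceof Node) return undefined;
  if (typeof Window !== 'undefined' && value instanceof Window) return undefined;
  if (Array.isArray(value)) {
    return value.map(sanitizeR).filter(v => v !== undefined);
  }
  const out = {};
  for (const [k, v] of Object.entries(value)) {
    const clean = sanitizeR(v);
    if (clean !== undefined) out[k] = clean;          // skip dropped members
  }
  return out;
}
```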
Import: read file, parse JSON, hydrate R, re-render.
function jwImportScan(tool, onHydrate) {
const input = document.createElement('input');
input.type = 'file';
input.accept = '.json';
input.onchange = async () => {
const data = JSON.parse(await input.files[0].text());
if (data.version !== 1) return alert('Unsupported scan version.');
localStorage.setItem(`jw-na-${tool}-${data.url}`, JSON.stringify(data.na || []));
onHydrate(data); // caller rebuilds R + re-renders
};
input.click();
}
That's the shape. The rest of the feature is UX — the export/import/re-run button row, the import-status banner, the N/A set persistence. All of it lives in /js/audit-toolkit.js — a single shared module used by Mega Analyzer and Site Analyzer.
Related reading
- Mark It N/A: Dismiss audit checks that don't apply — the companion feature; N/A dismissals export and import together with the scan
- Give Claude Code a permanent project memory — same philosophy (persistence over re-derivation) applied to IDE-level context
- How I built a browser-side SEO audit tool — broader architecture of the Mega Analyzer
Fact-check notes and sources
- JSON Feed spec (used as a reference design for self-describing JSON exports with version + items fields): jsonfeed.org
- MDN on Blob + URL.createObjectURL for client-side file downloads: MDN — URL.createObjectURL
- Blob.text() and the <input type="file"> pattern for in-browser file import: MDN — File API
- The philosophy of "let users own their data" in browser-side tools maps to the Indie Web principles — export formats that the user controls are the difference between "your data in our tool" and "your data that happens to render in our tool"
Try it: run the Mega Analyzer on any URL, click the Export scan button above the tabs, then click Import scan in a new tab and reload the same file. Everything rehydrates. Your N/A dismissals carry over. The AI fix prompt regenerates from the imported state.