Running Keyword Inspection Across a Network of Sites Without Losing a Day

Running a single site against its top ten competitors is straightforward. Running twenty sites against their respective top tens, while keeping track of which gaps overlap, which are unique, and which should be addressed in what order, is where the work becomes operational instead of analytical.

This is the problem the $100 Network playbook is built around: coordinating SEO and content across a portfolio of properties so the network compounds instead of each site competing on its own. The Keyword Inspection tool is designed for single-page work, but the way I actually use it on network sites is a batched, portfolio-level workflow that takes a morning, not a month.

This post is for operators who own or manage multiple sites and want to stop doing the same research twenty times.

The Problem With Per-Site Keyword Work

The temptation with a network is to treat every property as its own little SEO project. One spreadsheet per site, one audit per site, one content plan per site. This is how networks end up with twenty redundant pages on adjacent topics, no internal linking between them, and schema that contradicts itself across domains.

The better model is to treat the network as a single editorial calendar driven by topic clusters, not per-site keyword lists. Keyword Inspection fits that model because it gives you the same gap shape for any page you throw at it — which means you can compare the shape of the gaps across properties and spot patterns that a per-site view would hide.

A One-Morning Portfolio Workflow

Here is the workflow I use when I sit down to do a network keyword pass. It's built for someone running five to twenty properties. If you have more than twenty, cluster them by niche and do each niche as its own morning.

Pick your target pages first. Pull Google Search Console for every property in the network. Export the top 25 queries by impressions for each site. This gives you a list of queries the sites are already showing up for but not winning — which is exactly where Keyword Inspection delivers the most value.
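If pulling those exports by hand across twenty properties gets tedious, a short script can do the triage. Here is a minimal sketch in Python, assuming you have saved each property's Search Console query export as one CSV per site with Query, Impressions, and Position columns; the folder layout and the position cutoff are my conventions, not part of the tool:

```python
import csv
import glob
import os

def priority_queries(folder, top_n=25, min_position=4.0):
    """Pick each site's top queries by impressions where it isn't winning.

    Assumes one CSV per property in `folder`, named <site>.csv, with
    "Query", "Impressions", and "Position" columns -- adjust the column
    names to match your actual Search Console export.
    """
    out = {}
    for path in glob.glob(os.path.join(folder, "*.csv")):
        site = os.path.splitext(os.path.basename(path))[0]
        with open(path, newline="") as f:
            rows = list(csv.DictReader(f))
        # "Showing up but not winning": impressions exist, but average
        # position is 4 or worse, so there's headroom to gain.
        candidates = [r for r in rows if float(r["Position"]) >= min_position]
        candidates.sort(key=lambda r: int(r["Impressions"]), reverse=True)
        out[site] = [r["Query"] for r in candidates[:top_n]]
    return out
```

The position threshold is a judgment call; queries already ranking in the top three rarely repay a full gap analysis.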

Do the SERP pulls in one sitting. For each priority query, open an incognito Google window, run the search, and copy the top ten organic URLs into a throwaway text file. Label each block with the site and query. Doing this in a single pass beats context-switching between Google and the tool; it's boring, but it takes maybe three minutes per query.

Run Keyword Inspection in a batch. Open the tool in a browser tab. For each query in your text file, paste the search term, your target URL, and the ten competitor URLs. Click Run. When it finishes, click the Download full JSON button. You'll end up with one JSON file per query, each timestamped and named by the search term. Drop them in a folder called network-kw-YYYY-MM-DD/. You now have a permanent record of every gap.

Grep across the JSON files. This is where the network view emerges. Open a terminal and run something like:

grep -h '"kw"' network-kw-2026-04-17/*.json | sort | uniq -c | sort -rn | head -50

That gives you a frequency-ranked list of keyword gaps that appear across multiple sites in your network — not gaps for one page, but gaps for the portfolio. These are the topics that matter for the network at large. Content you build around them serves more than one property.
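As the batch grows, a small script can run the same frequency count without depending on the JSON being pretty-printed, which the grep one-liner quietly requires (it needs one key per line). A sketch in Python, assuming the dumps contain a "kw" field as the grep above implies; the rest of the file shape is an assumption, so the walker just searches every level:

```python
import glob
import json
from collections import Counter

def gap_frequency(folder, key="kw"):
    """Count how often each keyword gap appears across a folder of
    Keyword Inspection JSON dumps.

    Recursively walks every object in each file looking for `key`, so
    it also works on minified JSON where a line-oriented grep would
    come up empty. Adjust `key` if your exports name the field
    differently.
    """
    counts = Counter()

    def walk(node):
        if isinstance(node, dict):
            for k, v in node.items():
                if k == key and isinstance(v, str):
                    counts[v] += 1
                walk(v)
        elif isinstance(node, list):
            for item in node:
                walk(item)

    for path in glob.glob(f"{folder}/*.json"):
        with open(path) as f:
            walk(json.load(f))
    return counts.most_common(50)
```

Swapping the key argument gives you the schema-type scan from the next step for free.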

Look for schema gaps that repeat. Run the same frequency scan on schema types:

grep -h '"type"' network-kw-2026-04-17/*.json | sort | uniq -c | sort -rn | head

If FAQPage is missing on fifteen of your twenty properties, you have a network-wide schema gap. Adding FAQ blocks to your template system (or your CMS partials) fixes fifteen sites at once instead of requiring fifteen individual rewrites.

Using Shared Findings to Drive a Network Editorial Calendar

With the cross-site gap data in hand, your editorial calendar writes itself. Each row in the frequency-ranked keyword list is a candidate for either a pillar article on the network's flagship property or a series of linked articles across multiple properties.

The $100 Network methodology frames this decision as hub-vs-spoke. If a gap appears across most properties, it's a hub topic — build one deep pillar article on the strongest property, link every other property's related page to it, and let the authority concentrate. If a gap appears on only a few properties, it's a spoke topic — build locally on those sites without the linking overhead.

This decision is usually obvious from the grep output. A gap that appears in 15 of 20 queries is a hub. A gap that appears in 3 of 20 is a spoke.
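The rule of thumb can be mechanized if you want the calendar generated directly from the frequency counts. A minimal sketch, where the 50% cutoff mirrors the 15-of-20 versus 3-of-20 examples above; the threshold is a judgment call, not anything the tool enforces:

```python
def classify_gaps(gap_counts, total_properties, hub_threshold=0.5):
    """Split frequency-ranked gaps into hub and spoke topics.

    `gap_counts` maps a keyword gap to the number of properties it
    appears on. Gaps on at least `hub_threshold` of the portfolio
    become hubs (one pillar article on the strongest property);
    the rest become spokes (local fixes on the affected sites).
    """
    hubs, spokes = [], []
    for kw, count in sorted(gap_counts.items(), key=lambda x: -x[1]):
        if count / total_properties >= hub_threshold:
            hubs.append(kw)
        else:
            spokes.append(kw)
    return hubs, spokes
```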

Cross-Network Internal Linking, Informed by Gaps

One thing that is hard to see in a single Keyword Inspection report but obvious when you have twenty of them is who should link to whom. If Site A has a gap on a topic that Site B already covers well (the competitor analysis surfaces Site B's strong page in passing, via the shared-strength keywords), then the fix on Site A is not always "write new content" — sometimes it's "add a contextual internal link to Site B."

This is cross-network internal linking, and it is the single highest-leverage move available to a network operator. The network's pages are already indexed, Google already trusts the relationships between them, and you're effectively passing authority between your own properties.

To operationalize it, when you review a gap, check whether any other property in your portfolio covers the topic. If yes, link to it from the weaker page. If no, add it to the editorial calendar.
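That review step reduces to a lookup if you keep a coverage map of which properties already handle which topics. A sketch under that assumption; the coverage map is something you'd maintain by hand or from your own audits, not something the tool exports:

```python
def gap_action(gap_topic, weak_site, coverage):
    """Decide the fix for a gap on `weak_site`: link or write.

    `coverage` maps topics to the properties that already cover them
    well. If another property covers the topic, the fix is a
    contextual internal link; otherwise the topic goes on the
    editorial calendar.
    """
    covering = [s for s in coverage.get(gap_topic, []) if s != weak_site]
    if covering:
        return f"link {weak_site} -> {covering[0]} on '{gap_topic}'"
    return f"add '{gap_topic}' to editorial calendar for {weak_site}"
```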

Running Keyword Inspection Against Your Own Network Pages

There is a less obvious use of the tool that is specific to networks: using it as an internal benchmarking tool. Pick a topic where two of your own network properties both have a page. Run Keyword Inspection with one property as the "your URL" and the ten competitor slots filled with the actual Google top ten, including, if it ranks, the second property from your network.

You'll get gap data as usual, but you'll also see how one of your own properties compares to the other. If the stronger property has schema or heading structure the weaker one lacks, that's a direct copy-and-paste opportunity — you own both sides of the comparison, so there's no need to guess at intent or brand voice.

Pairing With the Site Analyzer and Batch Compare

The Site Analyzer runs per-page deep audits with eighty-plus checks including mixed-content detection, Article-schema validation, image CLS, viewport zoom, AI crawler rules in robots.txt, and JSON-LD @context correctness. Run it on each of your priority pages after the rewrite to verify you didn't regress anything else in the process.

The Batch Compare is the complement for the portfolio view — it scores up to twenty URLs at once across nine buckets. Use it monthly to see which of your network's properties are ahead and which are behind on aggregate signals like GEO, AEO, and schema. Pair that with Keyword Inspection's per-page gap work and you have both the zoom-out view (where's the network weakest?) and the zoom-in view (what specifically is missing on this page?).

The skill level set on any tool applies to all of them — so if you prefer Advanced mode (fewer explanations, more deployable code), you only set it once and the AI prompts across Keyword Inspection, Site Analyzer, Batch Compare, and the E-E-A-T Audit all respect it.

Honest Scaling Limits

There are a few places the workflow above breaks down.

JavaScript-rendered competitors are a real obstacle. Some sites serve an empty HTML shell and render content in the browser. Keyword Inspection can't see through that — it reads the raw HTML. You'll notice these as rows with near-zero word counts in the raw data. For those, you have to live with partial data or substitute another competitor.

The tool does not crawl your whole network. You run it one page at a time. The portfolio pattern comes from aggregating the JSON files yourself, which takes discipline. If you try to do it ad hoc without the labeled JSON dumps, you'll lose the cross-site view within a week.

SERP volatility is real. The top ten today is not the top ten next quarter. Re-run the key queries every 60 to 90 days and diff the JSON to see who entered and exited the SERP. New entrants are worth reading in detail.
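The quarterly diff doesn't need special tooling. Given the competitor URLs from two timestamped dumps (however you extract them; the exact field name depends on your export), a few lines of set arithmetic report the churn:

```python
def serp_diff(old_top10, new_top10):
    """Compare two snapshots of a query's top ten and report churn.

    Takes plain lists of competitor URLs from the old and new JSON
    dumps. New entrants are the pages worth reading in detail.
    """
    old, new = set(old_top10), set(new_top10)
    return {
        "entered": sorted(new - old),   # new to the SERP this quarter
        "exited": sorted(old - new),    # dropped out since last run
        "held": sorted(old & new),      # stable incumbents
    }
```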

The full playbook for running a portfolio of properties that compound rather than compete is in The $100 Network. The tool below is the per-page engine that feeds it.

Run Keyword Inspection →
