Most keyword lists arrive as a wall of text. You export from Ahrefs, Search Console, or a competitor crawl and end up with 400 rows in a spreadsheet, all ranked by volume. The implicit plan is to work top-down: highest volume first, one page per keyword, see what sticks. It does not work. You build a hundred shallow pages that cannibalise each other and rank for nothing.
What actually changes the shape of the output is one extra column: what stage of the funnel is this query? TOFU, MOFU, or BOFU. Awareness, consideration, decision. Once every row carries that tag, the list reorganises itself — you suddenly see which keywords want a long pillar guide, which want a side-by-side comparison page, and which are a product or pricing page pretending to be a blog-post idea. The funnel tag is a simple label, but it controls the format of the page you need to build, which controls whether you rank, which controls whether you convert.
The Funnel Keyword Audit does the tagging for you. Paste up to 2000 keywords — one per line, or CSV with keyword,volume,kd — and every row gets a funnel stage plus a four-way intent class (Informational / Navigational / Commercial Investigation / Transactional). The classification is rule-based, fast, and auditable. You can run it on a laptop in two seconds and read the output without a subscription.
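Accepting both input shapes is a few lines of parsing. A minimal sketch — the tool's internal field names and tolerance for malformed rows are assumptions, not its actual implementation:

```python
import csv
import io

def parse_keyword_input(raw: str) -> list[dict]:
    """Accept one-keyword-per-line text or keyword,volume,kd CSV rows."""
    rows = []
    for line in raw.strip().splitlines():
        if not line.strip():
            continue  # skip blank lines between pasted blocks
        parts = next(csv.reader(io.StringIO(line)))
        row = {"keyword": parts[0].strip().lower(), "volume": None, "kd": None}
        if len(parts) >= 2 and parts[1].strip().isdigit():
            row["volume"] = int(parts[1])
        if len(parts) >= 3 and parts[2].strip().isdigit():
            row["kd"] = int(parts[2])
        rows.append(row)
    return rows

rows = parse_keyword_input("best crm\ncrm pricing,800,34")
# rows[0] has no volume/kd; rows[1] carries volume=800, kd=34
```

Using `csv.reader` per line rather than naive `split(",")` keeps quoted keywords containing commas intact.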
How the tagging actually works
Intent is inferred from the modifier pattern on the keyword itself. The rules are short:
- Transactional — any of `buy, price, pricing, cost, cheap, discount, coupon, promo, deal, near me, order, book, subscribe, download, free trial, sign up, checkout, demo, purchase, for sale, shop`. A query with any of these is BOFU.
- Commercial Investigation — `vs, versus, compare, comparison, review(s), best, top, cheapest, alternative, alternatives, rating, ranked`. These queries are mid-funnel; the user has a shortlist and is evaluating.
- Informational — starts with `what / how / why / when / where / who / which / is / can / does / do / are / should / will` or contains `guide, tutorial, example, explain, definition, meaning, learn, tips, ideas, benefits, causes, symptoms, reasons`. Top of funnel — research phase.
- Navigational — single-term brand-like queries (`nike`, `kagi`, `stripe`) or explicit `login`, `sign in`, `contact`. The user already has a destination in mind.
A keyword that matches multiple categories gets the strongest intent in a fixed priority order — transactional beats commercial beats informational beats navigational. This mirrors how SERPs resolve the same ambiguity: when modifiers collide, results tend to skew toward the more commercial interpretation.
The funnel stage is a simple map: Transactional → BOFU, Commercial Investigation → MOFU, everything else → TOFU.
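The whole rule-set fits in a page of code. This is a minimal sketch of the classification logic described above — the tool's actual patterns and tie-breaking may differ in detail:

```python
import re

# Modifier lists from the rules above.
TRANSACTIONAL = ["buy", "price", "pricing", "cost", "cheap", "discount",
                 "coupon", "promo", "deal", "near me", "order", "book",
                 "subscribe", "download", "free trial", "sign up", "checkout",
                 "demo", "purchase", "for sale", "shop"]
COMMERCIAL = ["vs", "versus", "compare", "comparison", "review", "reviews",
              "best", "top", "cheapest", "alternative", "alternatives",
              "rating", "ranked"]
INFO_STARTERS = ("what", "how", "why", "when", "where", "who", "which",
                 "is", "can", "does", "do", "are", "should", "will")
INFO_CONTAINS = ["guide", "tutorial", "example", "explain", "definition",
                 "meaning", "learn", "tips", "ideas", "benefits", "causes",
                 "symptoms", "reasons"]
NAVIGATIONAL = ["login", "sign in", "contact"]

def has_word(keyword: str, term: str) -> bool:
    # Whole-word match so "top" doesn't fire inside "laptop".
    return re.search(r"\b" + re.escape(term) + r"\b", keyword) is not None

def classify(keyword: str) -> tuple[str, str]:
    """Return (intent, funnel stage) for one keyword."""
    kw = keyword.lower().strip()
    words = kw.split()
    # Priority order: transactional > commercial > informational > navigational.
    if any(has_word(kw, t) for t in TRANSACTIONAL):
        return "Transactional", "BOFU"
    if any(has_word(kw, t) for t in COMMERCIAL):
        return "Commercial Investigation", "MOFU"
    if words and (words[0] in INFO_STARTERS
                  or any(has_word(kw, t) for t in INFO_CONTAINS)):
        return "Informational", "TOFU"
    if len(words) == 1 or any(has_word(kw, t) for t in NAVIGATIONAL):
        return "Navigational", "TOFU"
    return "Informational", "TOFU"  # default bucket for unmatched queries

classify("best crm for startups")  # → ("Commercial Investigation", "MOFU")
classify("crm pricing")            # → ("Transactional", "BOFU")
```

Note the early returns enforce the priority order directly: `best crm pricing` hits the transactional check before the commercial one ever runs.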
This is not a neural-network classifier. It is a regex rule-set. That is deliberate. A classifier that's 96% accurate on some benchmark is worse than rules for this job, because rules are explainable: when a keyword lands in the wrong bucket, you can see why and add a single pattern. You cannot inspect a classifier's decision without a debugging trace and a test set. For something as consequential as deciding whether a keyword gets a product page or a blog post, I'd rather have rules I can read.
The distribution chart tells you more than the individual tags
The tool shows a TOFU/MOFU/BOFU bar after it classifies your list. That single chart is the most useful output. A healthy pillar-cluster site tends to land around 50–60% TOFU, 25–30% MOFU, 15–20% BOFU by count. Not by volume — by count. That's the mix that puts you in front of people early, keeps them through evaluation, and closes them at the end.
If your list comes back 10% TOFU and 85% BOFU, you're chasing bottom-funnel money keywords and ignoring the research phase. You'll convert the trickle of traffic you get, but you won't get the traffic, because you've built nothing that ranks for the questions your future customers are asking before they ever type your category into a search box.
If your list comes back 95% TOFU, you're building a content farm. Top-of-funnel traffic converts at tenths of a percent; without a MOFU or BOFU page to hand users off to, you're just feeding the analytics dashboard.
The imbalance is the point. Look at the chart before you look at the table.
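If you want to reproduce the chart's numbers from a tagged list yourself, the calculation is a per-stage share by count — a small self-contained sketch:

```python
from collections import Counter

def funnel_distribution(stages: list[str]) -> dict[str, float]:
    """Percentage of keywords per funnel stage, by count (not volume)."""
    counts = Counter(stages)
    total = len(stages)
    return {s: round(100 * counts[s] / total, 1)
            for s in ("TOFU", "MOFU", "BOFU")}

# 11 TOFU, 6 MOFU, 3 BOFU out of 20 keywords:
funnel_distribution(["TOFU"] * 11 + ["MOFU"] * 6 + ["BOFU"] * 3)
# → {'TOFU': 55.0, 'MOFU': 30.0, 'BOFU': 15.0} — inside the healthy range
```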
What you do with the CSV
The per-keyword table on the tool page is downloadable as CSV. Drop it into Google Sheets next to your original list and you have a column to sort on. Common moves from here:
- Sort by funnel + volume. The highest-volume BOFU keywords are usually the pages you should build first. They close fast even without topical authority.
- Filter to just TOFU, then sort by volume. This is the briefing list for a pillar-cluster editorial calendar. Cluster the survivors by shared topic and you have your next quarter of pillar posts.
- Look at Commercial Investigation keywords. Every `X vs Y` or `best X for Y` is a ranking opportunity that converts MOFU-to-BOFU traffic if you build a proper comparison table with a buy-link at the end.
- Flag Navigational queries. If any of your own-brand variations show up here, make sure the URL they'd hit is working and optimised. Brand-nav queries are the ones that absolutely must not 404.
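The first two sorting moves work equally well in a spreadsheet or a script. A stdlib-only sketch, with field names assumed to match the downloaded CSV's header:

```python
# A tagged list, as the downloaded CSV would give it (field names assumed).
rows = [
    {"keyword": "crm pricing",   "volume": 800,  "funnel": "BOFU"},
    {"keyword": "best crm",      "volume": 2400, "funnel": "MOFU"},
    {"keyword": "what is a crm", "volume": 5000, "funnel": "TOFU"},
    {"keyword": "crm login",     "volume": 300,  "funnel": "TOFU"},
]

# Sort by funnel + volume: BOFU first, highest volume on top within each stage.
stage_order = {"BOFU": 0, "MOFU": 1, "TOFU": 2}
build_order = sorted(rows, key=lambda r: (stage_order[r["funnel"]],
                                          -r["volume"]))

# Filter to TOFU, sort by volume: the pillar-cluster briefing list.
pillar_briefs = sorted((r for r in rows if r["funnel"] == "TOFU"),
                       key=lambda r: -r["volume"])
```

The tuple sort key does both levels in one pass: stage rank ascending, volume descending.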
The AI content-brief prompt
Below the table, the tool produces a prompt you paste into Claude, ChatGPT, or your terminal AI. It bundles the top 30 keywords per funnel stage, tells the model which content format each stage wants, and asks for a cluster-by-cluster brief: URL slug, format (pillar / comparison / product / FAQ / landing), target word count, H2 outline, and a priority-ranked list at the end.
Running that prompt through Claude Code (see the AI Terminal Kickstart) is the fastest way to go from keyword list to editorial calendar. The model does the clustering, proposes the page shapes, and ranks them by impact. You read the output, override a few judgement calls, and hand the briefs to a writer.
The prompt specifically asks for chunk-density compliance — 40 to 150 words per paragraph — because AI search is increasingly the discovery layer and each paragraph on a published page gets embedded as its own retrieval unit. A paragraph under 25 words has nothing an LLM can cite; a paragraph over 150 words gets split mid-thought. Briefs that ignore this end up with pages that rank but don't get picked up by Perplexity / ChatGPT / Gemini citations.
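Checking a draft against that 40-150-word band is mechanical. A minimal sketch of such a linter, assuming paragraphs are separated by blank lines (the thresholds are the ones above, made configurable):

```python
def chunk_density_report(text: str, lo: int = 40,
                         hi: int = 150) -> list[tuple[int, str]]:
    """Flag paragraphs outside the lo-hi word range used as retrieval units."""
    flags = []
    for n, para in enumerate(text.split("\n\n"), start=1):
        words = len(para.split())
        if words < lo:
            flags.append((n, f"too short ({words} words)"))
        elif words > hi:
            flags.append((n, f"too long ({words} words)"))
    return flags

draft = "Too short.\n\n" + " ".join(["word"] * 80)
chunk_density_report(draft)  # flags paragraph 1; paragraph 2 passes
```

Run it over a writer's draft before publication and you catch the one-line paragraphs and the 300-word walls before they ever get embedded.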
Where this fits with the other tools
The Funnel Audit is the first step in a chain. Once you have a tagged list:
- Feed the top-intent keywords into Keyword Inspection one at a time to get SERP-level gap analysis against your actual competitors.
- For each cluster the AI brief recommends, run Single Site Gen to scaffold the page with schema / JSON-LD / llms.txt / ai.txt baked in.
- Crawl the finished site with Link Graph to confirm your new TOFU pillars are linking forward to MOFU comparisons and those are linking forward to BOFU landing pages — the three stages should form a cascade, not three disconnected islands.
- Track LLM referral visits with the GA4 LLM Referral Segment so you can tell whether the content you built is actually getting cited.
The short version
Keyword lists are illegible until you sort them by intent. Intent is inferrable from the keyword itself, no NLP required. Once you have the sort, the content plan writes itself — and the only reason it's hard to do manually is that nobody wants to hand-tag 400 rows. The tool tags 2000 in two seconds. Paste, click, read the chart, build the pages.
Related reading
- The $20 Dollar Agency — Chapters 5-11 are the underlying methodology: keyword research, pillar-cluster modelling, schema, and the editorial cadence that turns a tagged list into ranked pages.
- The $100 Network — Chapter 22 (Programmatic Internal Linking) covers the cascade from TOFU → MOFU → BOFU pages that this tool's distribution chart is ultimately meant to serve.