There's an entire category of SaaS product now — Profound, Peec.ai, Otterly, Scrunch, LLMrefs, Indexly — whose value proposition is "we run prompts through ChatGPT/Claude/Perplexity/Gemini and count how often your brand shows up." Starting pricing is around $99/month. Enterprise plans push $499-$1,200/month. The category name most of them use: "AI visibility" or "AI share-of-voice."
The methodology, uniformly, is undisclosed. What's not a secret is that you can do the same work by hand in an hour with the free tiers of all four LLMs. The new AI Visibility Prompt Pack tool generates the prompt list and the CSV scoring worksheet. You run the prompts. You log the hits. You get the same data.
This post explains why the methodology matters more than the subscription, how to run the pack properly, and what "share of voice" actually measures in an AI context.
What share-of-voice in AI answers actually measures
Share-of-voice (SOV) in the classic marketing sense is how often your brand appears in a category's advertising vs. your competitors'. In the AI-answer context it measures something different, but the name stuck:
- Mention rate. What percentage of prompts about your category return your brand name at all?
- First-mention position. When you do appear, are you named first, third, or last in the answer? The first-named brand tends to win the reader's attention and the click.
- Sentiment. Is the framing positive, neutral, or hedged? "Acme is a popular choice" beats "reviews of Acme are mixed."
- Citation footprint. Did the AI engine cite a URL from your domain, or did it cite a third-party review?
Paid AI-visibility tools report all four. The prompt pack tool generates a worksheet with columns for all four metrics, repeated for each of the four major LLMs.
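To make the four metrics concrete, here's a minimal scoring sketch. The column names (`platform`, `appeared`, `first_mention`, `sentiment`, `cited_domain`), the placeholder domain, and the filename are illustrative assumptions, not the worksheet's actual headers; rename them to match the CSV the tool emits.

```python
import csv
from collections import Counter

OWN_DOMAIN = "yourdomain.com"  # placeholder: swap in your own domain

def score(csv_path: str) -> None:
    """Summarize mention rate, first-mention count, sentiment, and citation footprint per platform."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    for platform in sorted({r["platform"] for r in rows}):
        subset = [r for r in rows if r["platform"] == platform]
        mentions = [r for r in subset if r["appeared"] in ("Y", "Partial")]
        first = sum(1 for r in mentions if r["first_mention"] == "1st")
        sentiment = Counter(r["sentiment"] for r in mentions)
        own_cites = sum(1 for r in mentions if r["cited_domain"] == OWN_DOMAIN)
        print(f"{platform}: mentioned in {len(mentions)}/{len(subset)} prompts, "
              f"named first in {first}, sentiment {dict(sentiment)}, "
              f"own-domain citations {own_cites}")

score("ai-sov-2026-04-15.csv")
```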
The 20-prompt pack
The pack clusters into six intent types, each probing a different axis of visibility. The breakdown:
Unbranded (7 prompts). The true SOV test. "What are the best X?" — does your brand show up when no one's primed the LLM to think about you? If you appear here, you've earned AI mindshare organically. If you don't, strong results on branded prompts won't make up for it.
Comparison (5 prompts). Head-to-head with each competitor you listed, plus a generic "alternatives to [top competitor]" prompt. The single most revealing prompt in the whole pack is usually "alternatives to [competitor]" — do you make the list? Most brands don't.
Branded (4 prompts). "What is [brand]?", "Is [brand] any good?", "Who uses [brand]?", "What is [brand] best known for?" — these test whether the LLM has an accurate definition of you. Wrong answers flag knowledge-graph gaps you need to close with entity-citation work.
Authority (2 prompts). Person-level and content-level recognition. "Who are the leading voices on X?" — is your founder named? "What are the most-cited articles about X?" — does your best post surface?
Local (1 prompt, optional). Only fires if you supply a city. "Best X in [city]" — relevant for local businesses or city-specific landing pages.
Entity (2 prompts). Knowledge-graph completeness. "Tell me about X — founder, founding year, where it's based" flags any information the LLM doesn't have or has wrong.
Twenty core prompts (twenty-one with the optional local prompt), covering the same ground as the $99/mo tools.
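If you'd rather see the structure as data than prose, here's an illustrative generator. The phrasings are stand-ins (the tool's exact wordings are its own), but the counts per intent type match the breakdown above.

```python
def build_pack(category: str, brand: str, competitors: list[str],
               city: str | None = None) -> list[tuple[str, str]]:
    """Return (intent_type, prompt) pairs mirroring the pack's six clusters."""
    unbranded = [
        f"What are the best {category}?",
        f"Which {category} do experts recommend?",
        # ...five more unbranded phrasings in the full pack (7 total)
    ]
    # 4 head-to-head prompts plus the generic alternatives prompt (5 total)
    comparison = [f"{brand} vs {c}: which is better?" for c in competitors[:4]]
    comparison.append(f"What are the best alternatives to {competitors[0]}?")
    branded = [
        f"What is {brand}?",
        f"Is {brand} any good?",
        f"Who uses {brand}?",
        f"What is {brand} best known for?",
    ]
    authority = [
        f"Who are the leading voices on {category}?",
        f"What are the most-cited articles about {category}?",
    ]
    entity = [
        f"Tell me about {brand}: founder, founding year, where it's based.",
        f"What kind of company is {brand} and who runs it?",  # illustrative second entity prompt
    ]
    pack = ([("unbranded", p) for p in unbranded]
            + [("comparison", p) for p in comparison]
            + [("branded", p) for p in branded]
            + [("authority", p) for p in authority]
            + [("entity", p) for p in entity])
    if city:  # the optional local prompt only fires when a city is supplied
        pack.append(("local", f"Best {category} in {city}"))
    return pack

for intent, prompt in build_pack("CRM software", "Acme",
                                 ["CompetitorA", "CompetitorB", "CompetitorC", "CompetitorD"]):
    print(f"[{intent}] {prompt}")
```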
How to run the pack properly
The tool emits more than the prompts. It emits a run methodology that keeps your results reproducible and comparable to what the paid tools report. Four things matter:
First — incognito, every time. Run every prompt in a fresh incognito or private browsing window on every platform. Logged-in sessions skew the LLM toward content you've interacted with before, which is the worst-case scenario: you think you're measuring SOV but you're measuring your own browsing history.
Second — no modifications. Copy the prompt verbatim, paste, hit enter. Do not rephrase. Do not add "please." The whole point is reproducibility — if you change the prompt between platforms, you're comparing four different experiments.
Third — log quickly. Appeared? (Y/N/Partial). First-mention position ("1st", "paragraph 2", "bullet 4", "not mentioned"). Sentiment (+/0/-). Cited URL domain (for Perplexity specifically, which shows its sources). Do this right after each response while it's fresh. Don't batch-log at the end.
Fourth — rerun quarterly. LLM training data updates and live-retrieval indexes shift. A snapshot from March tells you nothing about August. Save each CSV with the date in the filename and you've got a time-series chart of your AI visibility.
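A small logging helper can enforce the last two habits. This is a sketch, not part of the tool: the field names match the illustrative worksheet columns used earlier, and the date-stamped filename gives you the dated CSVs the time series needs.

```python
import csv
import datetime
import pathlib

FIELDS = ["date", "platform", "prompt_id", "appeared",
          "first_mention", "sentiment", "cited_domain"]

def log_result(platform: str, prompt_id: str, appeared: str,
               first_mention: str, sentiment: str, cited_domain: str = "") -> None:
    """Append one observation right after reading the response; never batch-log."""
    today = datetime.date.today().isoformat()
    path = pathlib.Path(f"ai-sov-{today}.csv")  # date in the filename, per the methodology
    is_new = not path.exists()
    with path.open("a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({"date": today, "platform": platform, "prompt_id": prompt_id,
                         "appeared": appeared, "first_mention": first_mention,
                         "sentiment": sentiment, "cited_domain": cited_domain})

# Example: Perplexity surfaced the brand in bullet 4, neutral framing, cited a review site
log_result("perplexity", "unbranded-03", "Y", "bullet 4", "0", "g2.com")
```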
The full instruction document is included as a downloadable TXT the tool emits alongside the worksheet.
Why paid tools exist at all
If the methodology is this simple, why do paid tools exist? Three reasons:
Automation. Running 20 prompts across 4 platforms takes an hour. Running 100 prompts across 10 platforms weekly is the kind of labor a small team won't sustain. Paid tools automate via API; a minimal sketch of that loop closes this section.
API access. Some paid tools hit the LLM APIs directly, which means they're measuring raw LLM output uncontaminated by retrieval/browse features. Useful if you want to separate "LLM training knowledge" from "live retrieval." The free-tier approach measures the combined thing, which is what most users actually experience.
Historical data. Quarterly snapshots you run yourself give you a time series. Paid tools have multi-year industry-wide databases they benchmark you against. Useful if you're enterprise and want peer comparison. Noise for most SMBs.
None of these are worth $99-$499/month for a one-person shop or a small business. Run the pack by hand, save the CSV, rerun quarterly. That's the same signal.
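If you do eventually automate, the core loop is small. Here's a minimal sketch, assuming an OpenAI-compatible Python client and a placeholder model name; the paid tools' internals are undisclosed, so this shows the idea, not their implementation. Like the API-based tools, it measures raw model output with no browsing layer.

```python
from openai import OpenAI  # pip install openai; any OpenAI-compatible client works

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def mention_check(prompt: str, brand: str, model: str = "gpt-4o-mini") -> bool:
    """Run one pack prompt against the raw model and look for the brand."""
    response = client.chat.completions.create(
        model=model,  # placeholder model name; use whatever your account offers
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # minimize run-to-run variance for tracking
    )
    answer = response.choices[0].message.content or ""
    return brand.lower() in answer.lower()

print(mention_check("What are the best CRM software tools?", "Acme"))
```

The substring check is deliberately naive; a production tracker would also capture first-mention position and sentiment, the same columns the hand-run worksheet logs.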
When to upgrade to a paid tool
If you end up running the pack monthly, tracking more than 25 prompts, or comparing against 10+ competitors — the automation math starts to pencil out. The paid tools charge by volume, and above a certain volume they save time.
But the first three or four runs are always better done by hand. You learn what the LLMs actually say about your category, and you calibrate your expectations against the automated dashboards that come later. Going straight to the dashboard without ever reading an answer yourself is how you end up optimizing for a metric that doesn't reflect the user experience.
What the pack doesn't measure
A few things worth flagging that no SOV tool — free or paid — captures well:
- Action taken. Mention rate ≠ click-through rate ≠ conversion rate. You can be the top-cited brand and still not get traffic if the AI-answer citation is too abstract to compel a click.
- Long-tail visibility. The 20 prompts are the backbone. Long-tail queries ("best X for Y in Z industry with W constraint") number in the hundreds and won't surface in any 20-prompt pack. Fan out the seeds with the Query Fan-Out Generator to expand coverage.
- Training-data poisoning. A competitor could spin up a hundred doorway sites that collectively train LLMs to frame you negatively. A prompt-pack run today catches the current state; it doesn't surveil for active attempts to shift it.
For competitive monitoring of attempts to shift your positioning, Entity Citation Radar and AI Citation Readiness catch the upstream signals — Wikipedia edits, new review sites, schema drift — before they show up in prompt-pack output.
Related reading
- Query Fan-Out Generator — expand your seed keyword into the 30-60 sub-queries AI engines actually run behind each prompt
- AI Citation Readiness Audit — score a single article for the 14 signals that determine citation probability
- Entity Citation Radar — check whether high-authority sources (Wikipedia, Wikidata, Internet Archive) reference your brand
- Framework Origination Signal Generator — if you've coined a method or framework, claim it in the entity graph
- The $20 Dollar Agency visibility stack — how SOV fits into a full agency-tier audit offering
Fact-check notes and sources
- Profound pricing (as of 2026-04): tryprofound.com/pricing
- Peec.ai landing: peec.ai
- Indexly alternatives landing: indexly.ai/alternatives
- OpenAI ChatGPT free-tier availability (US): chat.openai.com
- Anthropic Claude free-tier availability: claude.ai
- Perplexity free-tier availability: perplexity.ai
- Google Gemini free-tier availability: gemini.google.com
The $20 Dollar Agency shows how agencies bundle AI visibility tracking into their reporting without burning margin on subscriptions. The prompt pack is the methodology; the book is the business model wrapped around it.