"Share of voice" in AI answers sounds like an NLP problem. It's not. It's a regex problem.
Paste an AI response. Count how many times your brand (and its aliases) appears. Count how many times each competitor appears. Divide brand-mentions by total-mentions. That's your SOV for that response. Run it across 10-25 responses from different prompts and you have the rolling metric that paid AI-visibility platforms charge $99-$499/month to report.
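The division itself is one line. A minimal sketch of the core ratio (the counts here are made up for illustration, and the function name is mine, not the worksheet's):

```javascript
// SOV for one response: brand mentions / (brand + competitor mentions).
// Guards against a response that names no brands at all.
function sov(brandMentions, competitorMentions) {
  const total = brandMentions + competitorMentions;
  return total === 0 ? 0 : brandMentions / total;
}

console.log(sov(3, 7)); // 3 brand mentions vs 7 competitor mentions -> 0.3
```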
The Share-of-Voice Worksheet does the counting. LocalStorage keeps the last 25 runs so you build a multi-response trend without needing a database.
The three signals
Mention rate. Total times your brand (or any alias you listed) appears in the response. Case-insensitive, whole-word. This number alone is useful but misleading — a response about your brand will mention it 10 times, a response about your category will mention it once. Context matters.
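Case-insensitive, whole-word counting is a one-regex job. A sketch of how such a counter might look (the escaping step matters so aliases like "acme.com" match a literal dot):

```javascript
// Count case-insensitive, whole-word occurrences of a term in a response.
// Escapes regex metacharacters so terms like "acme.com" are matched literally.
function countMentions(text, term) {
  const escaped = term.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
  const re = new RegExp(`\\b${escaped}\\b`, "gi");
  return (text.match(re) || []).length;
}

const response = "Acme leads the market. Competitors trail ACME, though acme.com is rarely cited.";
console.log(countMentions(response, "Acme")); // 3 -- Acme, ACME, and the "acme" inside acme.com
```

Note the word-boundary (`\b`) matching: "Acmeish" would not count, but the "acme" inside "acme.com" does, since a dot is not a word character.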
First-mention position. Where the first brand mention lands, measured in characters from the start of the response. Low = good (named early); high = mentioned in passing near the end. The worksheet converts this to a percent ("27% into the response"), which is easier to compare across responses of different lengths.
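The percent conversion is first-match index over total length. A sketch, assuming round-to-nearest-percent (the worksheet's exact rounding is my guess):

```javascript
// Position of the first whole-word brand mention, as a percent of
// response length. Lower = the brand is named earlier. Returns null
// if the brand never appears.
function firstMentionPercent(text, term) {
  const escaped = term.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
  const m = new RegExp(`\\b${escaped}\\b`, "i").exec(text);
  if (!m) return null;
  return Math.round((m.index / text.length) * 100);
}

console.log(firstMentionPercent("Acme tops the list of tools.", "Acme")); // 0 -- named at the very start
```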
SOV percentage. Your mentions divided by (your mentions + all competitors' mentions). 50%+ means you dominate the named-brand space in the response. 20-40% is parity with multiple competitors. Below 20% means you're an afterthought.
The worksheet also shows per-competitor mention counts so you can see which competitors are eating the oxygen in the response.
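Putting the SOV percentage and per-competitor counts together might look like this sketch (the band labels map the thresholds above; competitor names and counts are invented):

```javascript
// SOV percent plus per-competitor breakdown, from pre-computed mention counts.
// Bands follow the thresholds in the text: 50%+ dominant, 20-49% parity,
// below 20% afterthought.
function sovReport(brandCount, competitorCounts) {
  const rivalTotal = Object.values(competitorCounts).reduce((a, b) => a + b, 0);
  const total = brandCount + rivalTotal;
  const sov = total === 0 ? 0 : (brandCount / total) * 100;
  const band = sov >= 50 ? "dominant" : sov >= 20 ? "parity" : "afterthought";
  return { sov: Math.round(sov), band, competitorCounts };
}

console.log(sovReport(4, { Initech: 3, Globex: 3 }));
// { sov: 40, band: 'parity', competitorCounts: { Initech: 3, Globex: 3 } }
```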
Aliases matter
"Acme" and "Acme Corp" and "Acme Corporation" all need to count as your brand. The worksheet accepts a comma-separated alias list. Case-insensitive whole-word matching catches punctuation variations and sentence boundaries.
A frequent miss: the URL form. AI responses often cite "acme.com" in passing. Add that as an alias. Same for any product name that carries brand weight (Claude for Anthropic, Gemini for Google, etc.).
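Alias lists have a subtle trap: counting each alias separately double-counts, because "Acme" matches inside "Acme Corporation". One way to avoid that (whether the worksheet does it this way is an assumption) is to compile all aliases into a single alternation, longest first, so each mention is counted exactly once:

```javascript
// Count mentions across a comma-separated alias list. Sorting longer
// aliases first means "Acme Corporation" wins over "Acme" at the same
// position, so overlapping aliases aren't double-counted.
function aliasCount(text, aliasCsv) {
  const alternation = aliasCsv
    .split(",")
    .map((a) => a.trim())
    .filter(Boolean)
    .sort((a, b) => b.length - a.length)
    .map((a) => a.replace(/[.*+?^${}()|[\]\\]/g, "\\$&"))
    .join("|");
  const re = new RegExp(`\\b(?:${alternation})\\b`, "gi");
  return (text.match(re) || []).length;
}

const text = "Acme Corporation beat Initech. See acme.com for details; Acme wins.";
console.log(aliasCount(text, "Acme, Acme Corp, Acme Corporation, acme.com")); // 3, not 5
```

Summing per-alias counts over the same text would report 5 (the bare "Acme" regex also hits inside "Acme Corporation" and "acme.com"); the single alternation reports 3 distinct mentions.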
Why localStorage instead of a database
The worksheet keeps the last 25 runs locally, aggregates across them, and surfaces a rolling SOV. This is enough for single-user use. For team tracking, CSV-export each run and merge server-side — a Google Sheet with an import function replicates the aggregation.
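The keep-last-25 logic is a slice. A sketch with the storage round-trip stubbed out so the trimming is testable anywhere (the `"sov-runs"` key name and run shape are my assumptions, not the worksheet's actual schema):

```javascript
// Append a run and keep only the most recent 25. In the browser this
// would wrap localStorage.getItem/setItem with JSON.parse/stringify.
const MAX_RUNS = 25;

function appendRun(runs, newRun) {
  return [...runs, newRun].slice(-MAX_RUNS); // drops the oldest beyond 25
}

// Assumed browser usage:
//   const runs = JSON.parse(localStorage.getItem("sov-runs") || "[]");
//   const next = appendRun(runs, { sov: 32, date: Date.now() });
//   localStorage.setItem("sov-runs", JSON.stringify(next));
```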
Not using a server is a deliberate choice: the worksheet stays free, client-side, and private. Your paste never leaves your browser.
What the rolling trend reveals
The single-response SOV is noisy. One prompt happens to mention you, another doesn't, and the variance across prompts swamps any signal. The rolling 25-run SOV is stable enough to detect real movement month-over-month.
Typical pattern: you push out a batch of new content targeting AEO signals, re-run the worksheet across the same 25 prompts a month later, and the rolling SOV lifts 3-5 percentage points. That's a real result. A single-response lift from 20% to 40% on one prompt tells you nothing.
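One way to compute the rolling figure is to pool mentions across runs before dividing, rather than averaging per-response percentages, so a short response that happens to hit 100% doesn't distort the trend. Whether the worksheet pools or averages is an assumption; this sketch pools:

```javascript
// Rolling SOV across stored runs: sum brand and rival mentions over all
// runs, then divide once. Averaging the per-run percentages instead would
// overweight short responses.
function rollingSov(runs) {
  const brand = runs.reduce((sum, r) => sum + r.brand, 0);
  const rivals = runs.reduce((sum, r) => sum + r.rivals, 0);
  const total = brand + rivals;
  return total === 0 ? 0 : Math.round((brand / total) * 100);
}

console.log(rollingSov([
  { brand: 3, rivals: 7 }, // 30% on its own
  { brand: 0, rivals: 4 }, // 0% -- a prompt that missed you entirely
  { brand: 6, rivals: 2 }, // 75%
])); // pooled: 9 / 22 -> 41
```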
Related reading
- AI Visibility Prompt Pack — generates the 20 prompts you run through each LLM; the worksheet scores each response.
- Citation URL Extractor — for Perplexity / Gemini / Copilot responses that cite source URLs.
- AI-Answer Sentiment — polarity / hedge / risk scoring on the same responses.
- Entity Citation Radar — upstream check for whether high-authority sources reference your brand.
Fact-check notes and sources
- Profound share-of-voice marketing page: tryprofound.com
- Peec.ai positioning: peec.ai
- Indexly alternatives comparison: indexly.ai/alternatives
The $20 Dollar Agency covers AI-visibility tracking as a client-reporting deliverable. The worksheet is the scoring half; the Prompt Pack is the prompt half.