APE, RACE, CREATE, SPARK — The Four Prompt Frameworks Every Prompt Should Match One Of

Every "bad prompt" I've seen debugged in the last year has had the same shape. The user knew what they wanted. They didn't tell the model all of it. Specifically: they left out one or two of the elements that, written down as a checklist, become a prompting framework.

There are four named frameworks that dominate the 2026 prompting literature, and they overlap heavily. If your prompt satisfies any one of them, it's usually fine. If it doesn't satisfy any, the LLM is guessing about something it shouldn't have to guess about. The new Prompt Framework Auditor takes a prompt, scores it against all four, tells you which one it's closest to, and emits a scaffolded rewrite.

This post walks through the four frameworks, what each adds over the others, and when to pick which.

The four frameworks

APE — Action, Purpose, Expectation

The minimalist framework. Three elements:

  • Action. The verb — what should the LLM do? Analyze, summarize, list, identify, write, generate, compare.
  • Purpose. Why you need it — what will you do with the output?
  • Expectation. What the output should look like — format, length, structure.

APE is the floor. A prompt that doesn't have at least these three is leaving the LLM to guess about all three, which is the most common failure mode.

The prompt "Summarize this article in 3 bullets so I can decide if it's worth reading fully" is APE-complete. Action (summarize), purpose (decide if worth reading), expectation (3 bullets).

RACE — Role, Action, Context, Expectation

Adds two dimensions to APE:

  • Role. Who should the LLM be? "Act as a senior editor." "You are a mechanical engineer with 20 years of bridge-design experience."
  • Context. What background material does the LLM need? Source text, constraints, prior decisions.

RACE is the best all-rounder for knowledge work. Role framing alone lifts output quality substantially on any task where expertise matters — which is most knowledge work. Adding context lets the LLM ground its response in the specifics of your situation rather than generic best practices.

"Act as a senior editor. Review this draft (pasted below) for tonal consistency against the brand voice defined in the attached style guide. Flag any passive-voice sentences and suggest active-voice rewrites. Return a table with three columns: Original, Issue, Suggested Rewrite." Role, action, context, expectation, all present.

CREATE — Character, Request, Examples, Additions, Type, Extras

The heaviest framework. Six elements:

  • Character. Same as Role. Persona the LLM should adopt.
  • Request. Same as Action. What to do.
  • Examples. Few-shot demonstrations of the desired output.
  • Additions. Context, constraints, background.
  • Type. What kind of deliverable — article, email, memo, list.
  • Extras. Tone, style, constraints, edge-case handling.

CREATE is designed for content production and style-sensitive work. The addition that matters most over RACE is Examples — few-shot examples lift output quality dramatically when the LLM needs to match a specific voice or format. If you have two or three previous examples of what "good" looks like, include them.

SPARK — Specificity, Purpose, Audience, Result, Kontext

Marketing-flavored framework:

  • Specificity. Concrete names, numbers, dates, brands — anything vague is a flag.
  • Purpose. What the output will be used for.
  • Audience. Who reads the output.
  • Result. What good looks like — success criteria, not just format.
  • Kontext. Context (forced spelling for the mnemonic).

SPARK's distinctive additions are Specificity and Audience. The audience element changes tone, vocabulary, depth, and pacing — all meaningfully. Specificity is really a quality check; a SPARK-shaped prompt that's vague fails its own first criterion.

"Write a LinkedIn post (Audience: SaaS founders, 100-1000 employees) announcing our Q3 revenue milestone of $4.2M ARR (Specificity). Purpose: drive inbound investor interest. Result: 150-200 words, first-person voice, one concrete customer anecdote, ending with a soft CTA to book a demo. Kontext: we're a series-B sales automation company. Tone: confident, not bragging."

How the auditor detects each element

The tool parses your prompt against eleven detectable elements — role, action, purpose, expectation, context, examples, audience, type, tone, constraints, specificity — using keyword patterns and sentence-shape heuristics. For each framework, it scores your prompt as the fraction of that framework's elements it detected.
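Concretely, the scoring step reduces to set overlap. Here's a minimal Python sketch; the framework-to-element mapping is my own reading of the element lists in this post (e.g. CREATE's Character maps to role, SPARK's Result to expectation), not the tool's actual tables:

```python
# Illustrative sketch of per-framework scoring, not the tool's real code.
# Mapping assumption: CREATE's Character/Request/Additions/Extras are treated
# as role/action/context/tone; SPARK's Result is treated as expectation.
FRAMEWORKS = {
    "APE":    {"action", "purpose", "expectation"},
    "RACE":   {"role", "action", "context", "expectation"},
    "CREATE": {"role", "action", "examples", "context", "type", "tone"},
    "SPARK":  {"specificity", "purpose", "audience", "expectation", "context"},
}

def score_frameworks(detected: set[str]) -> dict[str, float]:
    """Score each framework as the fraction of its elements detected."""
    return {name: len(detected & elems) / len(elems)
            for name, elems in FRAMEWORKS.items()}

def closest_fit(detected: set[str]) -> str:
    """Name of the framework the prompt comes closest to satisfying."""
    scores = score_frameworks(detected)
    return max(scores, key=scores.get)
```

An APE-complete prompt scores 1.0 on APE but only 0.5 on RACE: action and expectation are present, role and context are missing.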

Role — detected via phrases like "act as", "you are", "as a", "take the role", "persona:".

Action — detected via imperative verbs: analyze, summarize, list, identify, write, generate, compare, evaluate, draft, create, rewrite, produce, etc.

Purpose — detected via "so that", "in order to", "the goal is", "purpose:", "objective:", "to help".

Expectation — detected via output verbs ("output", "return", "respond with", "format") combined with format nouns (table, list, bullet, JSON, CSV, markdown) and optional length specs (sentences, words, paragraphs).

Context — detected via "context:", "background:", "the following", "given", "attached", or prompt length > 240 chars (substantive length implies substantive context).

Examples — detected via "for example", "e.g.", "such as", "like:", "example:".

Audience — detected via "for [audience]", "target audience", "written for" with common audience keywords (reader, user, customer, founder, developer, etc.).

Type — detected via explicit deliverable nouns: article, blog post, email, tweet, memo, report, pitch, outline, script, press release, landing page, product description.

Tone — detected via "tone:", "voice:", "style:" or tone adjectives (formal, informal, direct, professional, terse, conversational, etc.).

Constraints — detected via "do not", "don't", "avoid", "must", "at most", "no more than", "limit:".

Specificity — detected via presence of numeric tokens OR two or more proper nouns.
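Several of the detectors above can be approximated with plain regex. The sketch below covers a representative subset, with the cue phrases and the 240-character context threshold taken from the descriptions above; treat the patterns as illustrative, not as the tool's implementation:

```python
import re

# Illustrative regex approximations of a few of the detectors described above.
PATTERNS = {
    "role":        r"\b(?:act as|you are|take the role)\b|persona:",
    "action":      r"\b(?:analyze|summarize|list|identify|write|generate"
                   r"|compare|evaluate|draft|create|rewrite|produce)\b",
    "purpose":     r"\b(?:so that|in order to|the goal is|to help)\b|purpose:|objective:",
    "examples":    r"\b(?:for example|such as)\b|e\.g\.|example:",
    "constraints": r"\b(?:do not|don'?t|avoid|must|at most|no more than)\b|limit:",
}

def detect_elements(prompt: str) -> set[str]:
    text = prompt.lower()
    found = {name for name, pat in PATTERNS.items() if re.search(pat, text)}
    # Context: explicit cue words, or substantive length (> 240 chars).
    if len(prompt) > 240 or re.search(
            r"\b(?:the following|given|attached)\b|context:|background:", text):
        found.add("context")
    # Specificity: any digit, or two or more capitalized words (a crude proxy
    # for proper nouns; sentence-initial capitals count too).
    if re.search(r"\d", prompt) or len(re.findall(r"\b[A-Z][a-z]+", prompt)) >= 2:
        found.add("specificity")
    return found
```

On a prompt like "Act as an editor. Summarize this in 3 bullets so that I can skim it." this fires on role, action, purpose, and specificity (the "3" trips the digit check), and correctly leaves context undetected.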

It's a heuristic. It will occasionally false-positive or false-negative. But across a sample of a couple hundred prompts it correlates well with quality — prompts scoring 80%+ on RACE produce consistently better LLM output than prompts scoring under 50%.

Which framework to pick

The auditor tells you the closest fit. In practice:

  • Use RACE as your default. It's the best all-rounder, the detectable-element count is modest, and adding role + context lifts quality on almost any task. If you're not sure which framework, start here.
  • Use APE when you need speed. Triaging incoming tasks, iterating on a prompt in Claude, debugging an existing workflow. Three elements, no ceremony, get something working, then upgrade.
  • Use CREATE when style matters. Content production, creative writing, brand voice, anything with a distinctive output shape. The Examples element is what CREATE buys you over RACE.
  • Use SPARK when audience matters. Marketing, sales, communications, anything user-facing. Audience specification is the SPARK move.

The scaffolded rewrite

After scoring, the tool emits a RACE-shaped rewrite of your prompt with the missing elements pre-scaffolded as bracketed placeholders. You fill in the placeholders with your specifics, and the result is a prompt that satisfies the framework you were closest to without you having to restructure the whole thing.

The rewrite is not meant to be copy-paste-ready. It's meant to be paste-into-an-editor, fill-in-the-placeholders, copy. Manual last-mile. But the structural skeleton is there.
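Under the hood, the scaffolding step is plausibly just conditional insertion of placeholder lines. A hypothetical sketch — the placeholder wording and function name are mine, and it's limited to RACE since the post says the rewrite is RACE-shaped:

```python
# Hypothetical sketch of the scaffolded rewrite: missing RACE elements become
# bracketed placeholders around the original prompt. Wording is illustrative.
RACE_SCAFFOLD = {
    "role":        "[Act as <role with relevant expertise>.]",
    "context":     "[Context: <background, sources, constraints>.]",
    "expectation": "[Return <format, length, structure>.]",
}

def scaffold_rewrite(prompt: str, detected: set[str]) -> str:
    lines = []
    if "role" not in detected:
        lines.append(RACE_SCAFFOLD["role"])
    lines.append(prompt.strip())  # the original action/request stays put
    if "context" not in detected:
        lines.append(RACE_SCAFFOLD["context"])
    if "expectation" not in detected:
        lines.append(RACE_SCAFFOLD["expectation"])
    return "\n".join(lines)
```

Feeding in a bare "Summarize this article." with only action detected yields the original sentence wrapped in role, context, and expectation placeholders, ready for manual fill-in.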

Alternatively — the tool emits a separate "copy LLM rewrite prompt" button. Paste that into Claude or ChatGPT and get a fully-populated rewrite back, no placeholders, based on what the LLM infers about your intent. This is the one-click path when the scaffolded version isn't enough.

Why this isn't Prompt Enhancer

Prompt Enhancer wraps a prompt in research-backed enhancement layers — ExpertPrompting personas, OPRO step-by-step breathing, EmotionPrompt stakes framing, 26-principles self-evaluation. It's additive: take any prompt, make it richer.

The Framework Auditor is structural. It doesn't add techniques; it maps your prompt against named skeletons and tells you which joints are missing. Use them together: audit first, then enhance. Or enhance first, then audit to check the result still fits the framework you were targeting.

Fact-check notes and sources

  • APE framework (Action-Purpose-Expectation) origin: Anthropic prompt engineering guide
  • RACE framework in marketing/prompting: multiple secondary sources; earliest canonical use in SEO copywriting circa 2023
  • CREATE framework (Character-Request-Examples-Additions-Type-Extras): popularized in OpenAI community forums and packaged in promptingguide.ai
  • SPARK framework (Specificity-Purpose-Audience-Result-Kontext): marketing-flavored variant, circulated in LinkedIn content 2024-2025
  • Few-shot examples lifting performance: Brown et al. 2020, "Language Models are Few-Shot Learners"

The $97 Launch dedicates a chapter to prompting frameworks for non-developers building with AI tools. The Framework Auditor is the programmatic companion — paste a prompt, see what you missed, fix it.

Last updated: April 2026