Top AI CLIs — How To Feed Them The Prompts Our Generators Build

Every generator on jwatte.com outputs a prompt. Single Site Gen writes a multi-thousand-word site build prompt. The Mega Analyzer emits a prompt that covers every failed signal. The Syndication Planner writes a per-platform rewrite brief. All of them are designed to be pasted somewhere.

The question most readers don't ask out loud: pasted where, exactly? ChatGPT and Claude.ai are the obvious answers. But if you work on the command line, a CLI is a better fit, especially when you're iterating on a prompt, feeding it different context, or piping files in. Here's the short list.

Claude Code

Anthropic's own CLI. Installs via npm install -g @anthropic-ai/claude-code. You run claude in a project directory and it opens an interactive session inside your terminal with full file read/write access.

Usage pattern with our generators: copy the generated mega-prompt to your clipboard, run claude, paste. Claude gets the prompt and your project files in one session and can make the edits directly.
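If you'd rather skip the interactive paste, recent Claude Code releases also accept a prompt on stdin in print mode. A minimal sketch, assuming the -p / --print flag is available in your version; the file names are illustrative:

```shell
# One-shot, non-interactive run: feed the generated mega-prompt to
# Claude Code and capture its response instead of opening a session.
cat build-prompt.md | claude -p > response.md
```

This is handy for scripting, but for multi-file edits the interactive session is still the better mode, since you can review each change as it lands.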

It's the most capable for multi-file work. Agentic by default. It can plan, read, edit, test, commit. For anything more complex than a single-file rewrite, this is the one.

Gemini CLI

npm install -g @google/gemini-cli. Similar shape to Claude Code: an agentic session inside your terminal. Models: Gemini 2.5 Pro / Flash. Google Search grounding is built in, which is useful when the prompt asks for citation-ready research.

Usage pattern: gemini to start. Then paste. Works well for research-heavy prompts (the SERP Features, Entity Citation Radar outputs) where web-grounded answers matter.
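For scripted runs, the CLI also takes a one-shot prompt flag. A sketch, assuming the -p / --prompt flag in current releases; serp-prompt.md is a placeholder file name:

```shell
# Non-interactive: run a research-heavy generated prompt and save the
# grounded answer to disk. File names are illustrative.
gemini -p "$(cat serp-prompt.md)" > serp-research.md
```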

aichat

A single-binary Rust CLI from sigoden. cargo install aichat or grab a release binary. Supports most model providers (OpenAI, Anthropic, Google, Ollama, Mistral). Shines for one-shot queries and pipe-in workflows.

Usage pattern: copy the prompt to a file (prompt.md), then aichat --file prompt.md. Or pipe straight in: cat prompt.md | aichat. For the Single Site Gen output, aichat --file build-prompt.md > build-output.md captures the model's entire response, generated site files included, in one file on disk.

OpenAI CLI (and llm)

Simon Willison's llm tool (pip install llm, then llm keys set openai) is the Swiss Army knife. Supports providers via plugins: OpenAI, Anthropic, Gemini, Mistral, local Ollama, anything with an API.

Usage pattern: llm -m claude-opus-4-7 < prompt.md. Or pipe from clipboard on macOS: pbpaste | llm -m gpt-5. Logs every conversation to a local SQLite database. Easy to search your prompt history later.

For the Mega Analyzer's fix prompts specifically, llm pairs well because you can run the same prompt through three different models and diff the outputs: cat fix-prompt.md | llm -m claude-opus-4-7 > a.md && cat fix-prompt.md | llm -m gpt-5 > b.md && diff a.md b.md.
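That one-liner generalizes to a loop. A sketch; the model IDs below are placeholders, so substitute whatever llm models lists on your machine:

```shell
# Run the same fix prompt through several models and collect outputs.
# Model names are illustrative; check `llm models` for what you have.
for model in claude-opus-4-7 gpt-5 gemini-2.5-pro; do
  llm -m "$model" < fix-prompt.md > "fix-$model.md"
done
diff fix-claude-opus-4-7.md fix-gpt-5.md
```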

Continue (VS Code / JetBrains)

Not a terminal CLI, but worth mentioning because it's the IDE-side equivalent. Install the Continue extension, hit Cmd+L, paste the prompt. Continue's project context picker lets you @mention specific files so the prompt has the right working set.

Usage pattern: generate a prompt from our tools, open your project in VS Code, paste into Continue's chat with the relevant files attached. For site-build work where the LLM needs to know what's already in the repo, this is better than a blind Claude Code session.

Aider

pip install aider-chat. Like Claude Code but git-native: every edit is a commit. For people who want the LLM's changes on an inspectable commit history from the start.
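A one-shot sketch, assuming aider's --message flag and a generated fix prompt saved to disk; the file paths are illustrative:

```shell
# Apply the generated fix prompt to specific files. Aider commits
# each edit to git automatically, so the history stays inspectable.
aider --message "$(cat fix-prompt.md)" index.html styles.css
```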

The pattern that actually ships

The generators on this site produce plain text. Text plays everywhere. Our usage pattern, roughly:

  • Single-URL audits (Mega Analyzer, Site Analyzer, E-E-A-T, Discover Readiness): paste into Claude.ai or ChatGPT web. Fastest.
  • Site builds (Single Site Gen): download the .md, feed it to claude in your project directory, let it write files.
  • Batch audits (Mega Batch, Batch Compare): run through llm so you can re-run the same prompt against different models.
  • Research-heavy tools (SERP Features, Entity Citation Radar, FAQ Harvester): Gemini CLI, because grounded search matters.
  • Multi-tool workflows (Syndication Planner output + Newsletter Swap brief + content calendar): aichat with file chaining.
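The aichat file chaining in that last bullet looks roughly like this; all file names are illustrative:

```shell
# Step 1: run the Syndication Planner brief through aichat.
aichat --file syndication-brief.md > platform-rewrites.md
# Step 2: feed that output plus the Newsletter Swap brief into a
# second pass to produce the combined calendar plan.
cat platform-rewrites.md newsletter-swap-brief.md | aichat > calendar-plan.md
```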

The generated prompts are format-agnostic. Use whichever of these fits your stack.

How to get the most out of a mega-prompt

Three habits pay back across all tools.

Pick the right level. The Mega / Quick / Mini toggle exists because Mega prompts are genuinely better for full builds and genuinely overkill for a single-file fix. Matching prompt size to task size is free iteration speed.

Check skill-level preamble. Site Analyzer and Mega Analyzer let you pick Beginner / Intermediate / Advanced. Advanced skips explanations and ships code; Beginner explains every fix first. If the model is producing more prose than you want, bump the level.

Run the audit after. Every site-build prompt is paired with an audit that grades the result. Single Site Gen → Mega Analyzer. Speakable Gen → AI Citation Readiness. The build-prompt-then-audit-prompt loop is the whole reason these tools exist together.
