This is the follow-on to AI Terminal Kickstart (the install primer). That post got Claude Code, Codex, ChatGPT CLI, GitHub Copilot CLI, and Netlify CLI onto a fresh machine. This post starts where that one stops — you have the binary, you can type claude, and the blinking cursor is now staring back at you. What do you actually type first, and why?
📚 The full series — Claude Code track + Codex mini-series:
Claude Code track:
- AI Terminal Kickstart — install the CLIs on a fresh machine (starting point).
- You've Got The CLI Installed — Now What? (this post) — plan mode, CLAUDE.md, slash commands overview, Day 0 → Day 7 ramp.
- Skills, Rules, Memory — deeper dive — the four-layer hierarchy with real migration examples.
- Adversarial subagents + worktree isolation — Boris Cherny's parallel-Claudes review pattern, in practice.
- Hooks implementation guide — kill approval fatigue with 5 concrete hook recipes.
- /loop vs /schedule — watchdog vs alarm clock, with working example tasks.
- CLAUDE.md anti-patterns — the 8 patterns that cause your file to rot.
- Auto Mode case studies — four real cases where the classifier got it right, wrong, and almost right.
- /insights-driven iteration — the 30-minute Monday ritual that compounds.
- Remote Control + Channels — iMessage / Telegram / Discord dispatch setup.
- Multi-model routing — when to break out of Claude-only.
- Agent skill marketplace — Skills 2.0, evaluating before installing, publishing your own.
Codex mini-series (OpenAI CLI companions):
- Codex CLI after install — OpenAI-specific first-week habits, model selection, cost mechanics.
- Codex vs Claude Code — task-level comparison — where each one actually wins.
- Prompt style — GPT vs Claude — the four stylistic differences that matter in practice.
- ChatGPT CLI / Codex CLI / GitHub Copilot CLI — positioning — three products that sound alike and do different jobs.
- Two-CLI workflow — split-screen routine for running Codex + Claude Code in parallel.
Open-weights + Google-stack track:
- Qwen self-hosted at home — when open-weights beats proprietary, hardware tiers, Ollama setup, quantization trade-offs.
- Gemini CLI — when to use — long context, multimodal, Google-ecosystem fit, and where Gemini falls short against Claude Code.
- Gemma — where it fits next to Qwen — Google's open-weights counterpart; when to pick it over Qwen and vice versa.
The short answer: plan before edit, narrow context before prompting, and turn anything you repeat twice into an artifact the next session inherits.
The long answer is the rest of this post.
The CLI is the runway, not the plane
A common mistake after install is treating Claude Code as a smarter autocomplete — ask a question, paste the answer, move on. That works for the first few sessions. It degrades once the project grows past 10 or 20 files, because the model is making decisions without a shared understanding of the repo, your conventions, or what you've already tried. The friction shows up as: re-explaining the same constraints every session, fighting Claude about a convention you thought you'd agreed on three sessions ago, or ending the day with six tabs of abandoned sessions you can't remember the state of.
The fix is an operating model — an explicit structure for how you plan, execute, track context, and hand off between sessions. Anthropic's internal practice and public essays from Boris Cherny (creator of Claude Code), Andrej Karpathy, and other power users converge on the same five habits:
- Always-on context stays small. The stuff in every prompt lives in CLAUDE.md, not in the chat.
- Repeated procedures become skills or slash commands. You stop typing the same 10-line setup each session.
- Active sessions are protected from pollution. You use /btw, /fork, /rewind, and /compact to keep the main thread coherent.
- Parallel work is isolated. Worktrees and subagents run in their own sandboxes so they don't step on each other.
- Guardrails remove noise, not judgment. Auto Mode, hooks, and rules handle routine approvals so you only see the prompts that actually need review.
Install is done. Adopting the operating model is the rest of the work.
Before anything else — turn on plan mode
The single highest-leverage habit after install is telling Claude to plan before editing. Not "start writing code," not "go implement X." Plan.
A plan-first prompt looks roughly like:
/plan Before you touch any files, produce a plan for <task>. Include:
1. Which files will change and why
2. The order of changes
3. Risks + edge cases
4. A short acceptance-test list
Do not edit anything until I approve the plan.
Two things happen. First, Claude commits to a structure you can read and push back on before code gets written. Catching "actually, don't refactor this helper, just extend the caller" at plan time is a 30-second fix; catching it after Claude has rewritten 400 lines is a cleanup pass. Second, the plan becomes a checklist you can reference for the rest of the session — "we said we'd skip the auth changes for now, please don't touch src/auth/."
You can also launch in plan mode from the CLI:
claude --permission-mode plan "Refactor the checkout flow to extract <helper> into a reusable module"
And there's a settings-level default if you want every session to start in plan mode until you cycle out:
{
"permissions": {
"defaultMode": "plan"
}
}
Plan mode is not a ceremony. It is the difference between a session that compounds and a session that churns. For anything that touches more than one file — use it.
CLAUDE.md as a living contract
Once plan mode is your default posture, the second-highest-leverage habit is maintaining a real CLAUDE.md at the root of every project.
What goes in:
- Project purpose in one paragraph (what this repo does, who uses it)
- Stack (languages, frameworks, runtimes, deploy target)
- Code-style preferences (naming, module layout, test conventions)
- Commands (dev server, build, test, deploy) — every routine command with its exact flag set
- External services (APIs, env var names — never values)
- Anti-patterns — things you've already decided not to do, with a one-sentence reason each
- Known quirks — surprising behaviors that a new session would otherwise stumble into
What stays out:
- Secrets. Ever. Use env var names only.
- Per-session state. CLAUDE.md is for always-true facts, not "we're currently working on branch X."
- Documentation a human reader wouldn't want. If a line isn't useful to a person onboarding to the repo, it isn't useful to Claude.
What makes it a living contract: every session updates it. Finished a refactor that changed an anti-pattern? Update CLAUDE.md. Discovered a quirk that bit you mid-session? Add it. The file's signal-to-noise improves across weeks, and the model spends less time re-discovering things the file already knows.
A useful discipline: at the end of each session, spend 30 seconds asking "what did I just learn that the next session should know by default?" and put it in CLAUDE.md. The compounding is why Boris Cherny's published workflows describe CLAUDE.md as "the living contract."
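Concretely, a minimal CLAUDE.md following the checklist above might look like this (the project details are invented for illustration):

```markdown
# CLAUDE.md

## Purpose
Internal billing dashboard for a small team (example project).

## Stack
TypeScript, SvelteKit, Postgres. Deploys to Netlify.

## Commands
- Dev server: `npm run dev`
- Tests: `npm test -- --runInBand`
- Deploy: `npm run deploy:prod` (always ask before running)

## Anti-patterns
- No class components; functional only (decided for readability).

## Known quirks
- `npm test` needs `TZ=UTC` or the date snapshots fail.
```

Short, always-true, and useful to a human onboarding to the repo. Anything session-specific stays out.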
Further reading: Arize's CLAUDE.md best practices has a prompt-learning framing; Anthropic's Effective context engineering for AI agents covers the theory.
The slash commands that matter, grouped by intent
Claude Code ships with a lot of slash commands, and the list keeps growing. Grouping them by what you're trying to accomplish is easier on working memory than memorizing the full list.
Learning the surface
- /powerup — built-in interactive onboarding. Run it after any Claude Code update. It walks you through features that most users never discover (hooks, subagents, worktrees, rewind, skills, MCP). Fifteen minutes, high-signal.
- /insights — analyzes your last ~30 days of sessions and writes an HTML report (usually to ~/.claude/usage-data/report.html). Shows where time and tokens go, which sessions you abandoned, recurring friction points. The actionable pattern: take the top 1–3 friction points from /insights and fix each one by promoting a rule into CLAUDE.md, creating a skill, or adding a hook.
Context hygiene (keep the main thread coherent)
- /btw — cheap single-turn side channel. "By the way, what's the difference between X and Y?" Claude answers from the existing prompt cache without consuming turns or polluting the main thread. Read-only; can't edit files or call new tools.
- /fork — clones the current session into a new one that inherits conversation history up to the fork point. Use when you want to explore an alternative approach deeply without throwing away the main thread. claude -r "session-name" --fork-session from the CLI opens the fork in a second terminal.
- /rewind — surgical state restoration. When pollution is already in the thread (bad path, wrong file read, misremembered context), /rewind rolls back to a checkpoint. Fastest trigger is Esc twice.
- /compact — compresses the current context window in place, keeping working state but dropping raw history. Use when the context is not wrong, just large.
Quick selection: question → /btw. Alternative exploration → /fork. Wrong direction → /rewind. Too big but correct → /compact. (Reference: Mastering Claude Code's /btw, /fork, and /rewind and the official slash-commands docs.)
Automation
- /loop <interval> <prompt> — in-session recurring prompt. /loop 30m run tests and summarize failures turns your session into a watcher. Shines for polling compilation, watching deploys, or auto-regenerating summaries. Lightweight, lives inside the running session.
- /schedule — cron-style scheduling of remote agent routines. Runs even when no session is open. Use for: nightly test runs, weekly dependency audits, morning "open issues + PRs summary" reports. (Official: Scheduled tasks docs.)
Rule of thumb: /loop is for "watch this while I work." /schedule is for "do this even when I'm not here."
Multi-agent work (bounded, isolated, reviewable)
- /simplify — after you implement a feature, spawn parallel review agents looking at code reuse, code quality, and efficiency. Each review runs concurrently with its own context, findings aggregate, fixes are applied with approval. Focus with /simplify focus on memory efficiency when you want a specific angle.
- /batch — the most ambitious bundled command. Decomposes a single change description ("migrate src/ from Solid to React") into 5–30 independent units, spawns one agent per unit in an isolated git worktree; each implements, tests, and opens a PR. Review the plan before approving; decomposition quality determines everything downstream.
- Subagents — spawn a task-shaped sub-Claude with its own clean context to do a bounded research job. Used adversarially in Cherny's workflow: two subagents with different review roles critique each other before you accept. (Official: Subagents docs.)
- Worktrees — claude --worktree feature-auth (or -w) creates an isolated git worktree for parallel work. Five parallel Claude sessions, each on its own worktree, is Cherny's published default for complex tasks. Collisions are impossible because each agent has its own checkout.
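The isolation claim is easy to verify with plain git, since worktrees are a stock git feature (it's an assumption here that --worktree wraps them). A throwaway-repo sketch:

```shell
# Each parallel agent gets its own checkout on its own branch.
# Demo uses git directly in a throwaway repo, nothing Claude-specific.
demo=$(mktemp -d) && cd "$demo"
git init -q main && cd main
git -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "init"
git worktree add -q ../wt-feature-auth -b feature-auth        # agent 1's sandbox
git worktree add -q ../wt-feature-billing -b feature-billing  # agent 2's sandbox
git worktree list  # three independent checkouts, one shared object store
```

Each worktree is a full working directory, so two agents editing the same file can never clobber each other's checkout.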
Debugging
- /debug — when Claude Code itself is misbehaving (hooks failing, Bash tool weirdness, skills not loading), /debug reads the session debug log and diagnoses. Accepts a hint: /debug why is the Bash tool failing?. Run this before filing a GitHub issue; it often finds the config problem.
- /claude-api — activates automatically when a repo imports anthropic, @anthropic-ai/sdk, or claude_agent_sdk. Loads the relevant SDK reference (tool use, streaming, batches, structured output, prompt caching, common pitfalls) so Claude is "API-smart" in that repo without you having to paste docs.
Skills, Rules, Memory — progressive disclosure
Stuffing everything into one giant CLAUDE.md is the second-most-common mistake after install (first is typing "go" before planning). The pattern that scales is progressive disclosure — four layers, each smaller and more situational than the last.
- CLAUDE.md — always-true facts that apply every turn of every session. Project purpose, stack, deploy command. Every prompt sees this.
- Rules — file-scoped or directory-scoped constraints. "Files in src/auth/ must pass the security-review checklist before merge." Applies only when Claude is touching that path. Split rules out of CLAUDE.md when the file grows past ~200 lines — smaller, scoped files are faster for the model to apply correctly.
- Skills — repeatable procedures packaged as invokable artifacts. A skill is a named, parameterized recipe ("run the pre-deploy check," "rebuild the tool-nav block"). Invoked with a slash command or referenced by other agents. The 2026 evolution: Skills 2.0 makes skills programmable — they can take inputs, return structured output, and be composed with other skills.
- Memory — learned facts saved across sessions. "User prefers functional Svelte components over class-based." "The NextDNS account uses profile ID X." Memory is the persistence layer between sessions; CLAUDE.md is the persistence layer within sessions.
When to use which: always-true → CLAUDE.md. Scoped → rule. Repeatable workflow → skill. Learned fact from past conversations → memory. (Reference: Save Hours: Stop Repeating Yourself to Claude: Skills, Rules, Memory — breaks down the boundaries by example.)
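As a sketch, a skill in this layer can be a small file; following Anthropic's Agent Skills format it might live at .claude/skills/pre-deploy-check/SKILL.md. The skill name and its steps are invented for illustration:

```markdown
---
name: pre-deploy-check
description: Run the pre-deploy checklist before any production deploy.
---

1. Run `npm test` and `npm run lint`; stop and report on any failure.
2. Confirm the required env var names are set (never echo values).
3. Summarize results and wait for explicit approval before deploying.
```

The frontmatter description is what lets the model decide when the skill is relevant, so write it the way you'd describe the task to a colleague.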
Hooks — making Claude deterministic for the boring parts
Hooks are shell commands the harness runs in response to events — before a tool call, after a file edit, on session close. They're how you make certain actions deterministic so Claude isn't asked to remember them every time.
Practical hook examples:
- PreToolUse before Bash — auto-approve any command matching a known-safe regex (^npm (test|run build|run lint)$), block the rest.
- PostToolUse after Edit — run npx eslint --fix + prettier --write on the edited file so formatting is never something Claude has to get right manually.
- Stop — write a session summary to ~/.claude/session-log/<date>.md on session close.
- UserPromptSubmit — prepend the current git branch + diff status to every prompt so Claude always knows the repo state.
(Reference: Claude Code Hooks: Making AI Gen Deterministic and the Hooks Implementation Guide: Audit System.)
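The PostToolUse formatter recipe above would be wired up in settings roughly like this. The matcher syntax follows my reading of the hooks docs, and format-edited-file.sh is a hypothetical script that reads the edited file's path from the hook's stdin JSON and runs eslint and prettier on it; verify the exact schema against your version:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "~/.claude/hooks/format-edited-file.sh" }
        ]
      }
    ]
  }
}
```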
Hooks are where a mature workflow pulls ahead of a casual one. Casual users approve the same npm test 30 times a day; mature users wrote a hook on day 3 and never see that prompt again.
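The allowlist decision inside such a PreToolUse hook is a few lines of shell. This is a sketch: the exit-code contract (0 approves, 2 blocks) mirrors the recipe above and should be checked against the hooks docs for your version:

```shell
# Decide whether a proposed Bash command is auto-approvable.
# Assumed contract from the recipe above: exit 0 = approve, exit 2 = block.
approve_cmd() {
  case "$1" in
    "npm test"|"npm run build"|"npm run lint") return 0 ;;  # known-safe
    *) return 2 ;;                                          # escalate to a human
  esac
}

approve_cmd "npm test" && echo "auto-approved"
```

In a real hook this function would be fed the command extracted from the hook's stdin payload, and its return code would become the script's exit code.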
Auto Mode — when it's a win, when it's a trap
Anthropic's March 2026 Auto Mode uses model-based classifiers to screen proposed actions, letting routine operations proceed and escalating the risky ones. Enabled via:
claude --enable-auto-mode
Then inside a session, Shift+Tab cycles through permission modes until "auto" is active.
When Auto Mode is a win:
- Trusted local developer environment. Your own repo, your own machine, no production systems in the loop.
- Tasks with clear rollback (anything under git; anything on a branch that can be force-reset).
- Routine actions that you would otherwise click-through approve anyway — lint fixes, file renames, dependency installs from known registries.
When Auto Mode is a trap:
- Untrusted environments or any repo where a prompt-injection could reach shared infrastructure.
- Production deploys. Explicit gate every time.
- High-blast-radius operations (database schema changes, DNS edits, credential rotations). The classifier catches most of these, but "most" is not "all."
- Regulated work where you need to defend every action in an audit.
The right mental model: Auto Mode reduces approval fatigue without eliminating approval judgment. The classifier is good at the routine middle; it is not a substitute for a human on the risky edges. Anthropic is explicit about this in the official essay — Auto Mode is a better trade inside trusted dev environments, not a universal answer.
Pair Auto Mode with hooks that have explicit deny rules for commands you never want auto-approved — that's the combination that actually feels safe.
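One plausible shape for those deny rules in settings; the permission-rule syntax here is an assumption to verify against the official permissions docs:

```json
{
  "permissions": {
    "deny": [
      "Bash(git push --force:*)",
      "Bash(rm -rf:*)",
      "Bash(terraform apply:*)"
    ]
  }
}
```

Deny rules win over Auto Mode's classifier, so the commands you list here are never auto-approved regardless of how routine they look.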
The /insights iteration loop — most-overlooked daily habit
Most install-then-use flows stop at "I can get Claude to do things now." The tier-up is measuring what's working and iterating.
/insights produces a local HTML report: tool-usage patterns, where tokens go, which sessions you abandoned, what caused restart loops. Read the report. Pick the top 1–3 friction points. For each:
- Friction: "I keep re-explaining the deploy command." → promote it into CLAUDE.md's Commands section.
- Friction: "The same pre-flight check before every deploy." → package as a skill.
- Friction: "Approvals for known-safe Bash commands slow me down." → add a hook with an allowlist.
- Friction: "Long sessions get incoherent after the third topic switch." → build the habit of running /fork before a topic switch, not after.
Run /insights weekly. The signal the first time is usually obvious (one or two big drags). Subsequent weeks surface subtler drags. The workflow quality curve rises whenever you're willing to treat the report as a checklist, not a dashboard.
Day 0 through Day 7 — a ramp plan
A short onboarding ramp, assuming you just finished the install primer:
Day 0 (install + first session). You already did the install primer. First session: claude in any repo, /powerup walkthrough, then ask Claude to read the repo and produce a one-page summary. Close.
Day 1 (CLAUDE.md). In the project you care about most, open CLAUDE.md and write it. Stack, purpose, commands, code-style preferences, anti-patterns. Ten to fifteen minutes. Commit it.
Day 2 (plan mode as default). For every session: /plan first. Read the plan, push back, approve. Do this even for tasks you think are trivial — the plan is also the session log.
Day 3 (first hook). Pick the most annoying repeated approval — for me it was npm test — and write a hook that auto-approves it. Live without the prompt for a day and feel the difference.
Day 4 (/insights first run). Run /insights. Read the HTML report. Pick the top single friction point. Fix it (CLAUDE.md edit, new hook, or new skill).
Day 5 (context hygiene). Practice the /fork habit. Next time you're tempted to "just quickly ask something unrelated" in the main thread, /fork instead. Do this three times and it becomes muscle memory.
Day 6 (first /loop). Set up a real /loop — the classic is /loop 10m run tests and summarize failures. Leave it running during a real work session. Feel the ambient awareness.
Day 7 (first /batch or /simplify). When your next feature lands, /simplify it. When your next refactor is bigger than one file, /batch the plan. The ceiling of what's worth doing with Claude Code rises fast once you're using the multi-agent commands.
By Day 7 you've installed, adopted plan-first, written CLAUDE.md, added a hook, run /insights, practiced context hygiene, set up a loop, and done your first multi-agent review. That's the curve from "I have it installed" to "I have an operating model." Past this point, the marginal improvements come from iterating on what /insights shows you each week — skills, rules, and memory grow as your work does.
Common pitfalls (the ones that cost the most time)
- "I dumped everything into one giant CLAUDE.md." Split by concern — always-true facts stay in CLAUDE.md, file-scoped constraints become rules, repeatable workflows become skills, learned facts become memory. Progressive disclosure beats prompt hoarding.
- "I keep approving the same prompt 30 times." That's a hook you haven't written yet.
- "My session is now a mess and I can't remember what we decided." /rewind to a checkpoint, or /fork to archive the current thread and continue clean.
- "Parallel agents keep colliding." That's the worktree pattern: claude --worktree <name> for every parallel task.
- "I started in plan mode but then forgot and Claude edited 800 lines." Set "defaultMode": "plan" in settings. Let the friction happen; it's the cheap end of the error spectrum.
Related reading
- AI Terminal Kickstart — the beginner primer. Install Claude Code, Codex, ChatGPT CLI, GitHub Copilot CLI, Netlify CLI. Prereq for everything in this post.
- ISP surveillance + DIY monitoring — tangentially related: if you're running Claude Code in a small-business context, the network isolation from that post matters for what Claude sees.
- Isolating IoT Devices On A Consumer Router — same theme, different layer — segment your network before you segment your sessions.
- Claude MD Generator — companion tool: generates a CLAUDE.md scaffold from basic project inputs. Use it to bootstrap the file described in this post.
Fact-check notes and sources
Primary sources for the workflow patterns in this post:
- Anthropic Engineering — Harness design for long-running application development, Effective context engineering for AI agents, Equipping agents for the real world with Agent Skills, Auto Mode: a safer way to skip permissions, How Anthropic teams use Claude Code.
- Official Claude Code docs — Slash commands, Scheduled tasks, Subagents, Remote Control, Memory, Changelog.
- Boris Cherny (Claude Code creator) — How I use Claude Code and the writeup at howborisusesclaudecode.com. Also covered in Every's podcast, InfoQ, and Pragmatic Engineer — How Claude Code is built.
- Andrej Karpathy — "Claude agents on the left, IDE on the right" + Business Insider analysis.
- Practical field notes — The Claude Code Handbook at freeCodeCamp, Paddo — 10 tips from inside the Claude Code team, CodingScape — how Anthropic engineering teams use Claude Code every day.
- Context hygiene toolkit — Mastering /btw, /fork, and /rewind.
- Q1 2026 feature surface — MindStudio Q1 2026 update roundup + Q1 2026 roundup part 2.
- CLAUDE.md discipline — Arize: CLAUDE.md best practices from prompt learning, Walturn: prompt engineering for Claude.
- Skills + rules + memory breakdown — Save Hours: Stop Repeating Yourself to Claude.
- Security / boundaries — MintMCP — Claude Code security, Claude Code playbook — security best practices.
This post is informational, not engineering consulting advice. The slash commands, flags, and settings referenced reflect Claude Code's Q1 2026 surface (version 2.1.91+); re-verify against the official changelog before depending on any specific command in production. Mentions of Anthropic, Claude Code, OpenAI, GitHub, Netlify, and all third-party authors / publications are nominative fair use. No affiliation is implied.