
The Weekly /insights Iteration — 30 Minutes Every Monday To Make Claude Code Faster For You

Part of the Claude Code workflow series. Start with the install primer; then what to do after install; then this post to turn the one-time "that's neat" /insights report into a weekly ritual that actually moves your workflow.

/insights is the most powerful Claude Code command that people run once and never again. Anthropic ships it as a built-in; the report is genuinely useful; most users open it exactly once, notice their token usage, close the tab, and move on. That's not /insights failing — it's the user failing to run the loop.

The tier-up is mechanical: 30 minutes every Monday morning, read the report, fix the top friction point, update CLAUDE.md / hooks / skills accordingly. Six weeks of this and your workflow is measurably faster than when you started. I'll walk through the actual steps.

What the report contains

Running /insights generates an HTML file (usually ~/.claude/usage-data/report.html). Open it. The sections that matter:

Projects and sessions. Per-project session count, token consumption, average session length, abandonment rate.

Tool usage. Which tools you call most often, which ones you call rarely. Useful for spotting "I never use subagents, maybe I should" gaps.

Friction points. The most actionable section. Highlights:

  • Sessions you restarted (often because you went in the wrong direction)
  • Prompts where you typed the same thing twice (you re-explained something)
  • Approvals that fired >10 times in one session (a hook candidate)
  • Context-window exhaustion events (your CLAUDE.md is too big, or you're not using /fork)
  • Long sessions with low throughput (usually a decomposition failure)

Personalized suggestions. What's working, what's hindering, quick wins, ambitious workflows. Treat these as prompts, not prescriptions — read them, translate into action items, don't just follow blindly.

The first time you open the report, reading through every section takes ~20 minutes. Subsequent weeks take 10–15 because you know what you're looking for.

The Monday ritual

A concrete workflow for you to copy, verbatim, starting next Monday:

7:55am — coffee.

8:00am — open a fresh Claude Code session in any repo. Run /insights. Open the HTML.

8:02am — read the friction points section first. Not the pretty charts. The charts are status; the friction section is change. Pick the top 3 items by time cost (weight each by "how much time did this cost me last week").
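
Picking the top 3 is just a sort by estimated time cost. Trivial, but writing the estimates down keeps the choice honest. A sketch (the friction points and minute counts below are made up for illustration):

```python
# (friction point, estimated minutes lost last week) -- estimates are made up
friction = [
    ("re-explained the deploy command", 45),
    ("approved npm test repeatedly", 60),
    ("context exhaustion at turn 40", 20),
    ("abandoned session after a tangent", 30),
]

# Highest time cost first; the first three are Monday's shortlist.
top3 = sorted(friction, key=lambda item: item[1], reverse=True)[:3]
```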

8:10am — for each of the top 3, categorize the fix:

  • "I re-explained X" → CLAUDE.md or rule. The knowledge wasn't persistent; make it persistent.
  • "I approved X 20 times" → hook. A 10-line hook eliminates the prompt.
  • "I abandoned the session after an unrelated tangent" → /fork discipline. Train yourself to /fork before the tangent, not after.
  • "Context window filled up after 40 turns" → /compact as a habit, or a shorter CLAUDE.md.
  • "Prompt kept missing what I wanted" → prompt template in a skill. If you're writing "do X with Y constraint, output as Z" every time, that's a skill.
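
The "10-line hook" from the second bullet can be sketched as a small filter over the proposed tool call. A minimal sketch, assuming the hook receives the Bash command in a JSON payload and answers with an approval decision; the payload field names and decision format here are illustrative, so check your Claude Code version's actual hook contract before copying:

```python
import json

# Commands you have decided are always safe to run without a prompt.
SAFE_PREFIXES = ("npm test", "npm run lint", "git status", "git diff")

def is_safe(command: str) -> bool:
    """True if the command starts with a known-safe prefix."""
    return command.strip().startswith(SAFE_PREFIXES)

def decide(payload: dict) -> str:
    """Return the hook's stdout for one tool call: an approval JSON line,
    or an empty string to fall through to the normal approval prompt.
    The "tool_input"/"command"/"decision" names are assumed, not canonical."""
    command = payload.get("tool_input", {}).get("command", "")
    return json.dumps({"decision": "approve"}) if is_safe(command) else ""

# Wire-up (assumed contract): the hook reads one JSON payload from stdin
# and prints the decision, e.g.  print(decide(json.load(sys.stdin)))
```

The exact input/output shape may differ in your Claude Code version; the point is that the allowlist lives in one place and gets versioned with the repo.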

8:20am — fix the #1 item. Not all three — just the top one. You'll work on the other two over the week, but Monday's 30 minutes funds the fix for #1 specifically.

8:28am — commit the fix (whether it's a CLAUDE.md line, a new hook, a new skill, or a new rule).

8:30am — close the report, start actual work for the day.

Half an hour a week. Four or five fixes a month. By week 12 you've eliminated 10 of your top friction points and added 3 new skills.

What the first four weeks usually look like

From my own data plus a handful of people who've tried this:

Week 1 — the obvious hook. Every new Claude Code user is approving npm test (or equivalent) dozens of times. Week 1's fix is almost always a hook that auto-approves the known-safe Bash commands.

Week 2 — CLAUDE.md entry for the repeated question. You noticed you kept telling Claude "the deploy command is node deploy-site.mjs" every session. Add it to CLAUDE.md's Commands section. That's an hour back every week.
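
The entry itself is tiny. A sketch, reusing the deploy command from this example (the section name is a convention, not a required schema):

```markdown
## Commands

- Deploy: `node deploy-site.mjs`
- Test: `npm test`
```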

Week 3 — first skill. You realized you do the same multi-step setup for every new feature. Package it as a skill. First invocation feels weird ("I'm using a slash command I made?"), second invocation feels great, third invocation you stop thinking about it.

Week 4 — /fork discipline. The /insights report shows you abandoned two sessions after unrelated tangents. You train yourself to /fork first. Sessions get cleaner. You stop losing 15-minute tangent rabbit-holes.

After week 4 the improvements get more specific to your situation. Maybe you discover you spend a lot of time on dependency audits — that becomes a /schedule. Maybe you realize your subagents aren't catching security issues — you add a security-role subagent to the standard review flow.

The shape of the curve matters more than the specific fixes: the report becomes less dramatic over time. Week 1 might show three 5-hour-a-week friction points; week 12 might show a single 45-minute friction point. That's success. You're not running out of problems; you're running out of high-impact ones.

Reading the suggestions critically

/insights generates personalized suggestions. Some are sharp. Some are boilerplate. How to tell them apart:

Sharp suggestion: "You've spent 4.2 hours this month approving git push to feature branches. Consider a hook that auto-approves pushes to branches matching feature/* while still prompting on pushes to main." Specific numbers, specific action, specific exception.

Boilerplate suggestion: "Consider using skills more." No numbers, no specifics, no guidance on which repeated action would benefit.

Ignore the boilerplate. Act on the sharp ones. The signal-to-noise ratio improves once you've been running the loop for a month — the model has more session data to base suggestions on.
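
Acting on a sharp suggestion like the push-approval one above usually takes a few lines. A minimal sketch of just the branch check, assuming the surrounding hook can extract the target branch from the push command; `fnmatch` provides the `feature/*` glob behavior:

```python
from fnmatch import fnmatch

AUTO_APPROVE_PATTERNS = ("feature/*",)   # pushes here skip the prompt
ALWAYS_PROMPT = ("main", "master")       # pushes here always prompt

def auto_approve_push(branch: str) -> bool:
    """True if a push to `branch` can skip the approval prompt."""
    if branch in ALWAYS_PROMPT:
        return False
    return any(fnmatch(branch, pattern) for pattern in AUTO_APPROVE_PATTERNS)
```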

When /insights shows nothing actionable

Some weeks the report is quiet. No major friction, no standout patterns. Don't force it.

Instead, use the quiet week to:

  • Skim the ambitious workflows section — these are patterns you haven't tried yet.
  • Audit your active /schedule and /loop list. Delete the ones you've forgotten about.
  • Read a recent Anthropic engineering post you haven't gotten to yet (the harness design essay and agent skills essay are the two highest-leverage reads).
  • Close stale worktrees and sessions you've got sitting around.

The point of the Monday ritual is the ritual, not the fix. Some weeks the fix is maintenance rather than improvement.

What to share with your team

If you're doing this solo, you don't need to share anything. If you're on a team that uses Claude Code:

  • Share the hooks. Check .claude/hooks.json into the repo. Everyone benefits from the same auto-approval allowlist.
  • Share the skills. .claude/skills/* goes in the repo too.
  • Don't share your /insights report. It contains your per-session data; it's personal. But do share the fixes that came out of it.
  • Run a quarterly team review. Everyone brings their top 3 friction points from the last 12 weeks. Often three people have the same one and didn't know. That's the highest-value fix you'll find all quarter.

What this buys you

The cost: 30 minutes per week × 52 weeks = 26 hours per year. What you get back, from personal data across ~six months:

  • 10–15 eliminated recurring approvals (hook-replaceable).
  • 20–30 CLAUDE.md / rule additions that compound.
  • 5–10 new skills that replace multi-step manual workflows.
  • Noticeably cleaner sessions — fewer tangents, fewer abandonments, fewer /rewind invocations.
  • The habit itself — treating your AI workflow as something you measure and improve, not something that just exists.

That last one is the main prize. People who run the loop pull ahead of people who don't over 6–12 months, and the gap keeps widening.

Fact-check notes and sources

Informational, not engineering consulting advice. The /insights report format and contents reflect Q1 2026 behavior and may evolve. Verify against the official changelog. Mentions of Anthropic, Claude Code, linked publications are nominative fair use.
