# Note Improvement

Capture improvement opportunities discovered during work so they don't get silently dropped. Appends to a project-level `.turbo/improvements.md` file that serves as a backlog of actionable ideas.
## Step 1: Determine Project Root

Find the nearest `.git` directory or project root. The improvements file lives at `.turbo/improvements.md` relative to the project root.
## Step 2: Identify the Improvement

Gather from context or $ARGUMENTS:

- What: One-line summary of the improvement
- Type: One of `direct`, `investigate`, or `plan` — see criteria below
- Category: One of `refactor`, `performance`, `reliability`, `readability`, `testing`, `docs`, `dx` (developer experience), or `feature`
- Where: File path(s) and/or area of the codebase affected
- Why: Brief rationale — what's the benefit?
### Type criteria

- `direct` — Clear scope and a known approach, ready to apply via `/implement`.
- `investigate` — A symptom that needs root-cause analysis first: unclear root cause, performance question, intermittent bug, "something feels off".
- `plan` — Everything else: the approach warrants writing down before implementing (multi-file refactor, test additions, feature work).

When the criteria above clearly select one value, use it. Otherwise, use AskUserQuestion to confirm; default to `plan` if the user declines to choose.
## Step 3: Append to File

Read `.turbo/improvements.md` if it exists. Create it with the header below if it doesn't.
File header (only when creating new):
# Improvements
Out-of-scope improvement opportunities captured during work sessions. Review periodically and pull items into active work when appropriate.
Entry format:
### <one-line summary>
- **Type**: <direct | investigate | plan>
- **Category**: <category>
- **Where**: `<file path or area>`
- **Why**: <brief rationale>
- **Noted**: <YYYY-MM-DD>
Append the new entry at the end of the file.
## Step 4: Confirm
Tell the user the improvement was noted and where the file is.
## Rules
- Deduplicate before appending: check for a similar entry and update it in place when one exists. When the existing entry predates the Type field, add a Type line while updating.
- When updating an existing entry tagged with the legacy values `trivial` or `standard`, rewrite the Type to `direct` or `plan` respectively so the file converges on current vocabulary.
- Keep entries concise — 3-5 lines max per entry. These are backlog items, not specs.
- Record only; leave action to the user, who decides when to address it.
- When the project has no `.turbo/` directory, use AskUserQuestion to confirm the location before creating one.
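The deduplication and legacy-migration rules can be sketched as small helpers; the names and the crude case-insensitive heading match are illustrative assumptions, not part of the skill:

```python
# Legacy Type values map onto the current vocabulary.
LEGACY_TYPES = {"trivial": "direct", "standard": "plan"}

def migrate_type(value: str) -> str:
    """Rewrite a legacy Type value; current values pass through unchanged."""
    return LEGACY_TYPES.get(value.strip().lower(), value)

def existing_summaries(text: str) -> list[str]:
    """Collect entry headings ('### <summary>') from the improvements file."""
    return [line[4:].strip() for line in text.splitlines()
            if line.startswith("### ")]

def is_duplicate(summary: str, text: str) -> bool:
    """Crude similarity check: case-insensitive match against existing headings."""
    target = summary.strip().lower()
    return any(target == s.lower() for s in existing_summaries(text))
```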
## More from tobihagemann/turbo

- **find-dead-code** — Find dead code using parallel subagent analysis and optional CLI tools, treating code only referenced from tests as dead. Use when the user asks to "find dead code", "find unused code", "find unused exports", "find unreferenced functions", "clean up dead code", or "what code is unused". Analysis-only — does not modify or delete code.
- **simplify-code** — Run a multi-agent review of changed files for reuse, quality, efficiency, and clarity issues followed by automated fixes. Use when the user asks to "simplify code", "review changed code", "check for code reuse", "review code quality", "review efficiency", "simplify changes", "clean up code", "refactor changes", or "run simplify".
- **smoke-test** — Launch the app and verify hands-on that it works by interacting with it. Use when the user asks to "smoke test", "test it manually", "verify it works", "try it out", "run a smoke test", "check it in the browser", or "does it actually work". Not for unit/integration tests.
- **finalize** — Run the post-implementation quality assurance workflow including tests, code polishing, review, and commit. Use when the user asks to "finalize implementation", "finalize changes", "wrap up implementation", "finish up", "ready to commit", or "run QA workflow".
- **self-improve** — Extract lessons from the current session and route them to the appropriate knowledge layer (project AGENTS.md, auto memory, existing skills, or new skills). Use when the user asks to "self-improve", "distill this session", "save learnings", "update CLAUDE.md with what we learned", "capture session insights", "remember this for next time", "extract lessons", "update skills from session", or "what did we learn".
- **evaluate-findings** — Critically assess external feedback (code reviews, AI reviewers, PR comments) and decide which suggestions to apply using adversarial verification. Use when the user asks to "evaluate findings", "assess review comments", "triage review feedback", "evaluate review output", or "filter false positives".