# Capture Context (`capture-context`)

Make the plan file self-contained so implementation can proceed after clearing context.
## Step 1: Locate the Plan File
Find the active plan file path from the plan mode system prompt. Read the file to determine whether it already has content.
If no plan file exists, create one at the path specified by plan mode.
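Step 1 could be sketched as follows, assuming the plan path has already been read from the plan mode system prompt (the `locate_plan_file` name and signature are illustrative, not part of any real API):

```python
from pathlib import Path

def locate_plan_file(plan_path: str) -> tuple[Path, bool]:
    """Return the plan file path and whether it already has content.

    `plan_path` is a hypothetical input; in practice it comes from
    the plan mode system prompt.
    """
    path = Path(plan_path)
    if not path.exists():
        # No plan file yet: create an empty one at the specified path.
        path.parent.mkdir(parents=True, exist_ok=True)
        path.touch()
        return path, False
    # Whitespace-only files count as empty.
    has_content = path.read_text().strip() != ""
    return path, has_content
```

The boolean return drives Step 3's placement logic: an empty file gets Session Context as its first section, a populated one gets an insert or merge.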
## Step 2: Scan the Conversation
Extract information across these categories. Skip any category where nothing relevant was discussed.
- Work State — Commits made, branches created, PRs opened. Current branch and its relationship to the base branch. Build and test status. What is implemented vs. what remains.
- Codebase Findings — Files explored and their relevance. Reference implementations discovered (exact file paths and line ranges). Patterns and utilities identified for reuse. Architecture or module boundaries understood.
- Decisions and Rationale — Approaches chosen and why. Alternatives considered and why they were rejected. Constraints that shaped the design. User preferences expressed during discussion.
- Requirements Refinement — How the original request evolved through discussion. Scope narrowed or expanded. Acceptance criteria clarified. Dependencies identified.
- Open Questions — Unresolved items that need attention during implementation. Assumptions made that should be verified. Risks or unknowns flagged.
If the scan yields nothing substantial, tell the user and stop.
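The five categories above could be modeled as a simple structure with a skip-empty check; this is a sketch for illustration, and the class and field names are invented here, not part of the plugin:

```python
from dataclasses import dataclass, field, fields

@dataclass
class SessionScan:
    """One list of findings per category; empty categories are skipped."""
    work_state: list[str] = field(default_factory=list)
    codebase_findings: list[str] = field(default_factory=list)
    decisions_and_rationale: list[str] = field(default_factory=list)
    requirements_refinement: list[str] = field(default_factory=list)
    open_questions: list[str] = field(default_factory=list)

    def nonempty(self) -> dict[str, list[str]]:
        # Skip any category where nothing relevant was discussed.
        return {f.name: getattr(self, f.name)
                for f in fields(self) if getattr(self, f.name)}

    def is_substantial(self) -> bool:
        # If every category is empty, tell the user and stop.
        return bool(self.nonempty())
```

`is_substantial()` corresponds to the "nothing substantial" early exit: an all-empty scan means there is nothing worth writing into the plan.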
## Step 3: Write the Session Context Section
Write a `## Session Context` section in the plan file.
Placement:
- If the plan already has content, insert as the first section before implementation details
- If the plan already has a `## Session Context` section, merge new information into it. Prefer newer information when it conflicts with earlier content. Deduplicate and preserve the category structure.
- If the plan is empty, write it as the first section
Writing guidelines:
- Use concrete details: file paths, commit hashes, branch names, line numbers
- Keep each item to 1-3 lines. This is a reference document, not a narrative.
- Preserve reasoning: "chose X because Y" is more valuable than just "chose X"
- Include code snippets only when essential for understanding (key signatures, critical types)
- When an earlier approach was abandoned, capture only the final state. Mention the abandoned approach only if its rationale matters for implementation.
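The merge case could be sketched as below, assuming each category has already been parsed into a list of items (the function name is hypothetical; real conflict resolution between contradictory items still takes judgment, so this sketch only appends and deduplicates exact repeats):

```python
def merge_session_context(existing: dict[str, list[str]],
                          new: dict[str, list[str]]) -> dict[str, list[str]]:
    """Merge newly scanned findings into an existing Session Context.

    Preserves the category structure, drops exact duplicates, and
    appends newer items after older ones, since the new scan reflects
    the latest conversation state.
    """
    merged = {category: list(items) for category, items in existing.items()}
    for category, items in new.items():
        bucket = merged.setdefault(category, [])
        for item in items:
            if item not in bucket:
                bucket.append(item)
    return merged
```

A direct contradiction (e.g. "use REST" vs. "use gRPC") would survive this merge as two items; per the placement rules, the newer statement should win and the older one should be removed by hand.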
## Step 4: Verify Completeness
Re-scan the conversation with these checks:
- Resumability — Could someone reading only this plan start implementing without the original conversation?
- Decision coverage — Does every non-obvious choice in the implementation plan have its rationale captured?
- No orphaned references — Is every file, branch, or PR mentioned in implementation steps grounded in Session Context?
Update the section if gaps are found.
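The "no orphaned references" check is the most mechanical of the three and could be approximated like this; the regex is a rough heuristic for file-like paths, not a parser, and the function name is invented for illustration:

```python
import re

# Matches path-like tokens such as src/auth/login.py (heuristic only).
_PATH_RE = re.compile(r"[\w./-]+/[\w.-]+\.\w+")

def orphaned_references(implementation: str, session_context: str) -> set[str]:
    """Return file paths mentioned in implementation steps but absent
    from the Session Context section."""
    mentioned = set(_PATH_RE.findall(implementation))
    grounded = set(_PATH_RE.findall(session_context))
    return mentioned - grounded
```

Branch names and PR numbers would need their own patterns; the point is that any non-empty result means the Session Context has a gap to fill.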
## Step 5: Present Summary
Tell the user what was captured: a brief count of items per category and any notable gaps.
## Rules
- Capture knowledge, not logistics. "We debugged for 20 minutes" is irrelevant; the resulting finding is relevant.
- Omit context any developer would already know or that the implementation steps make obvious.
- Do not duplicate information already present in other plan sections.
- Do not capture secrets, credentials, or environment-specific details.