# Review Recent Sessions
Review multiple recent sessions from the current project directory to identify cross-session patterns.
## Prerequisites

- The `ed3d-extending-claude` plugin must be installed.
- The `ed3d-session-reflection` plugin must be installed (provides the `conversation-reviewer` agent and the `reduce-transcript.py` script).
- The current session's transcript path must be available (to determine the project directory).
## Invocation

The user may invoke this as:

- `/review-recent-sessions` — review the last 5 sessions
- `/review-recent-sessions 10` — review the last 10 sessions
## Steps

### 1. Find the project's session directory

Use the current session's transcript path to determine the project directory. The transcript path looks like:

```
~/.claude/projects/-Users-ed-Development-.../SESSION_ID.jsonl
```
The directory containing it is the project's session directory.
If you cannot determine the project directory, ask the user.
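The derivation is just taking the transcript's parent directory. A minimal sketch, using a hypothetical transcript path as a stand-in for the real one:

```shell
# Hypothetical transcript path; in practice this comes from the current session.
transcript="$HOME/.claude/projects/-Users-ed-Development-myproj/0123abcd.jsonl"

# The project's session directory is the transcript's parent directory.
project_dir="$(dirname "$transcript")"
echo "$project_dir"
```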
### 2. List recent sessions

Find the most recent JSONL files in the project directory, sorted by modification time, limited to the requested count (default 5):

```shell
ls -t "<project_session_dir>"/*.jsonl | head -<count>
```
Exclude the current session's transcript (the user doesn't want to review the review session itself).
If fewer than 2 sessions are found, tell the user there aren't enough sessions for a cross-session review and suggest using `/review-session` instead.
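The listing with the exclusion applied might look like the following sketch. The paths here are demo fixtures created so the snippet runs standalone; substitute the real session directory, requested count, and current transcript path:

```shell
# Demo fixtures; real values come from step 1 and the invocation argument.
project_dir=/tmp/demo-session-dir
mkdir -p "$project_dir"
touch "$project_dir/older.jsonl" "$project_dir/old.jsonl" "$project_dir/current.jsonl"
current_transcript="$project_dir/current.jsonl"
count=5

# Newest first, minus the current session's transcript, capped at the count.
ls -t "$project_dir"/*.jsonl | grep -Fxv "$current_transcript" | head -n "$count"
```

`grep -Fxv` drops only the line that exactly matches the current transcript's path, so other sessions are never filtered by accident.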
### 3. Reduce all transcripts

Create a working directory:

```shell
mkdir -p /tmp/session-review-batch
```
For each session, run the reduction script:

```shell
python3 "${CLAUDE_PLUGIN_ROOT}/scripts/reduce-transcript.py" "<session.jsonl>" "/tmp/session-review-batch/reduced-<N>.txt"
```
This can be done in a single bash command with a loop.
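One way to write that loop is sketched below. `reduce-transcript.py` comes from the `ed3d-session-reflection` plugin, so this sketch substitutes a `cp` stand-in to stay runnable anywhere, with the real command shown in the comment; it also assumes session filenames contain no whitespace (true for the UUID-style names under `~/.claude/projects`):

```shell
# Demo fixtures; in real use $project_dir and $count come from the earlier steps.
project_dir=/tmp/demo-session-dir2
count=5
mkdir -p "$project_dir" /tmp/session-review-batch
touch "$project_dir/s1.jsonl" "$project_dir/s2.jsonl"

n=1
ls -t "$project_dir"/*.jsonl | head -n "$count" | while read -r session; do
  # Real command:
  # python3 "${CLAUDE_PLUGIN_ROOT}/scripts/reduce-transcript.py" \
  #   "$session" "/tmp/session-review-batch/reduced-$n.txt"
  cp "$session" "/tmp/session-review-batch/reduced-$n.txt"   # stand-in for the reducer
  n=$((n + 1))
done

ls /tmp/session-review-batch
```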
### 4. Dispatch parallel reviewers

For each reduced transcript, dispatch a `conversation-reviewer` agent in the background with this prompt:

```
Transcript path: /tmp/session-review-batch/reduced-<N>.txt
Write your findings to: /tmp/session-review-batch/findings-<N>.md

Read the transcript, analyze it, and write your findings following your output format.
```
Dispatch ALL reviewers in a single message to maximize parallelism. Tell the user you've dispatched N reviewers and are waiting for results.
### 5. Synthesize findings

Once all reviewers complete, dispatch a general-purpose Sonnet agent to synthesize. Prompt it to read all findings files in `/tmp/session-review-batch/findings-*.md` and produce a synthesis that identifies:

- **Recurring patterns** — issues that appear across multiple sessions. These are the highest-value findings because they represent systematic problems.
- **Progression** — is the user getting better or worse at prompting over time? Is the agent handling certain tasks better or worse?
- **Highest-impact recommendations** — across all sessions, which recommendations would have the biggest effect? Prioritize:
  - CLAUDE.md changes (things the user keeps correcting)
  - Hooks (behaviors that should be enforced automatically)
  - Skills/workflows (multi-step processes that keep being done manually)
- **Session-specific highlights** — any single-session finding that's particularly noteworthy even if it didn't recur.

The agent should write its synthesis to `/tmp/session-review-batch/synthesis.md`, formatted as Markdown. It should be specific (reference which sessions showed which patterns) and concise (a summary, not a repetition of individual findings).
### 6. Present synthesis

Read `/tmp/session-review-batch/synthesis.md` and present the full synthesis to the user.

If any individual session findings are particularly interesting, mention that the user can find per-session details in `/tmp/session-review-batch/findings-<N>.md`.