Exploration Session Brief (Interactive Co-Authoring)
Note: This skill runs fully interactively via Claude — no script needed.
`execute.py` is a planned batch-mode convenience wrapper that hasn't been built yet, but the core skill works now. The intake-agent provides an alternative agentic dispatch path.
This skill provides a structured, 3-stage interactive workflow for generating an Exploration Session Brief. Guide the user through each stage in sequence — do not skip ahead or dump the full brief at once.
Important Note for Agents: Do NOT passively run a bash script or dump a massive block of markdown. You must guide the user through the following 3 stages.
Stage 1: Context Gathering
Your goal is to understand the boundaries of the exploration before drafting anything. Ask all three questions together in a single message:
1. Domain: What category best fits this exploration?
   - Software feature or system — new capability, redesign, or technical spike
   - Business process — workflow, approval flow, operations improvement
   - Risk or compliance — mitigation strategy, audit finding, policy gap
   - Research or strategy — market analysis, competitive review, roadmap decision
   - Other — describe it briefly
2. Trigger: What specific event, pain point, or decision caused us to start this session right now?
3. Raw material: Do you have any notes, transcripts, screenshots, or prior docs to share? (You can brain-dump freely — messy is fine.)
Wait for the user's response. If any answer is too sparse to proceed (e.g., one-word domain, no trigger explained), ask one targeted follow-up before moving to Stage 2. Do not proceed until you have a clear trigger and at least one concrete detail.
Stage 2: Section-by-Section Refinement
Build the brief iteratively — do not write the entire document in one pass.
1. Propose the outline: Based on the domain from Stage 1, propose a section list using the appropriate template below. Present it as a numbered list and ask the user: "Does this structure fit? Anything to add or remove?"

   | Domain | Suggested sections |
   | --- | --- |
   | Software feature/system | Problem Statement · Stakeholders · Current Behavior · Desired Behavior · Constraints · Open Questions |
   | Business process | Problem Statement · Stakeholders · Current Process · Pain Points · Desired State · Constraints · Open Questions |
   | Risk/compliance | Risk Description · Affected Parties · Current Exposure · Mitigation Options · Constraints · Open Questions |
   | Research/strategy | Research Question · Context · What We Know · What We Don't Know · Success Criteria · Open Questions |

2. Curate: Apply any changes the user requests. If they want a custom section, add it. Do not argue for the template.
3. Draft section by section: For each section, write a 2–5 sentence draft using only information from Stage 1. Present it and ask: "What should we keep, cut, or change?" Apply edits before moving to the next section. Mark anything inferred (not stated by the user) as `[UNCONFIRMED]`.
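For illustration, a drafted section presented to the user during Stage 2 might look like this (the content below is a hypothetical example, not prescribed wording):

```markdown
## Current Process
Invoices are approved manually by the finance lead, typically within
two business days. [CONFIRMED]
Approvals above $10k appear to require a second sign-off. [UNCONFIRMED]
```

Followed by the prompt: "What should we keep, cut, or change?"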
Stage 3: Reader Testing (Blind Spot Analysis)
Once all sections are drafted and approved, stress-test the brief before writing it out:
- Pick the most likely first reader — the person who will act on this brief (e.g., an engineer building a spec, a manager approving budget, a team changing a process).
- Predict exactly 3 questions that reader would ask after reading the brief if they had never been part of this session. Make the questions specific to this brief's content, not generic.
- Present the 3 questions to the user: "If [reader] reads this, they'll likely ask: [Q1], [Q2], [Q3]. Does the brief answer these? Should we add answers inline or list them under `## Open Questions`?"
- Apply whatever the user decides.
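A sketch of what that Stage 3 message could look like — the reader and all three questions here are invented examples, not required wording:

```markdown
If the engineering lead reads this brief, they'll likely ask:
1. Which systems does the current approval workflow touch today?
2. Who owns final sign-off once the process changes?
3. What deadline is driving this work?

Does the brief answer these? Should we add answers inline or
list them under ## Open Questions?
```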
Anti-Hallucination Rules
- Do NOT assume the output will be a software product unless the user says so. Ensure language remains agnostic (e.g., use "Solution" instead of "App").
- Clearly demarcate proven facts from assumptions using `[CONFIRMED]` and `[UNCONFIRMED]` markers.
- Never fake user personas or edge cases; derive them strictly from the user's Context Gathering dump.
Final Output Destination
Write the approved, refined markdown content to `exploration/sessions/session-brief.md` (or a timestamped equivalent).
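As a rough sketch of the final artifact — section names vary by domain, and the placeholder content is hypothetical — the written file might look like:

```markdown
<!-- exploration/sessions/session-brief.md -->
# Exploration Session Brief: <topic>

## Problem Statement
... [CONFIRMED]

## Stakeholders
... [UNCONFIRMED]

## Constraints
...

## Open Questions
- ...
```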