# User Story Capture
Derive structured user stories and acceptance criteria from exploration session captures.
## Usage

```shell
python3 .agents/skills/user-story-capture/scripts/execute.py \
  --input <file> [<file2> ...] \
  --format <standard|gherkin> \
  --output <output_file.md>
```
Formats:
- `standard` (default): `As a [user type], I want [goal], so that [benefit]`, with a priority table and gaps.
- `gherkin`: Standard plus `Given / When / Then` Acceptance Criteria blocks per story.
Flags:
- `--input PATH [PATH ...]`: Session brief, BRD draft, prototype notes, or prior captures
- `--output PATH`: Destination file (default: `exploration/captures/user-stories-draft.md`)
- `--format FORMAT`: Output format (default: `standard`)
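For orientation, the flag surface above maps onto Python's `argparse` roughly as follows. This is an illustrative sketch, not the actual `execute.py`; the real script may parse its flags differently.

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Sketch of the CLI described above; the real execute.py may differ.
    parser = argparse.ArgumentParser(prog="user-story-capture")
    parser.add_argument("--input", nargs="+", required=True,
                        help="Session brief, BRD draft, prototype notes, or prior captures")
    parser.add_argument("--format", choices=["standard", "gherkin"], default="standard",
                        help="Output format")
    parser.add_argument("--output", default="exploration/captures/user-stories-draft.md",
                        help="Destination file")
    return parser

args = build_parser().parse_args(["--input", "brief.md", "--format", "gherkin"])
print(args.format)  # → gherkin
```

`nargs="+"` is what allows `--input` to accept one or more files, matching the usage line above.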
## Interactive Co-Authoring Workflow
When invoked interactively, follow this 3-stage pattern. Do not dump a full story list at once.
### Stage 1: Context Gathering
Ask all four questions in a single message before generating anything:
- **Input files**: Which source documents should I work from? (Check `exploration/` and list what you find: session brief, BRD draft, prototype notes.) If no files exist, stop and ask for input before proceeding.
- **Primary actor**: Which user role, system actor, or job-to-be-done is the highest priority for the first implementation slice? (Use role-neutral language, e.g. "the person approving requests", "the agent running evals", not just "the user".)
- **Out-of-scope**: Are there any actors or workflows we should explicitly exclude from this story set?
- **Format**: Should acceptance criteria use the standard format (`As a / I want / So that`) or Gherkin (`Given / When / Then`)? Default to standard unless Gherkin is requested.
After the user responds, read each input file they identify.
### Stage 2: Iterative Refinement
Build the backlog in layers — do not jump straight to full Gherkin blocks.
1. **Outline first**: Based on the input files and primary actor, present a numbered list of lightweight story titles (one line each, no ACs yet). Ask: "Which of these should we keep, cut, or merge for the first slice?"
2. **Curate**: Apply changes. Mark any story derived from unclear or inferred source material as `[UNCONFIRMED]`.
3. **Draft approved stories**: For each kept story, write the full format:
   - Standard: `As a [actor], I want [goal], so that [benefit]`.
   - Gherkin: Add `Given / When / Then` AC blocks after the story statement.
Gherkin format rules:
- `Given` = precondition or system state before the action (what is already true)
- `When` = the single action or event the actor performs
- `Then` = the observable, testable outcome (what changes or appears)
- One `When` per scenario. Use `And` for additional `Given` or `Then` clauses.
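Applied to a hypothetical story (invented here purely for illustration, not drawn from any capture), these rules produce an AC block like:

```gherkin
# Story (hypothetical): As a reviewer, I want to approve pending requests,
# so that blocked work can proceed.
Scenario: Reviewer approves a pending request
  Given a request is in the "pending" state
  And the reviewer has approval permission
  When the reviewer approves the request
  Then the request moves to the "approved" state
  And the requester is notified
```

Note the single `When`; the extra precondition and outcome ride on `And` clauses.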
Present each story and ask: "Accurate? Anything to add or change?" Apply edits before the next story.
### Stage 3: Reader Testing (Test-Driven ACs)
After all approved stories are drafted:
- For each priority story (the top 3 if there are many), predict exactly 2 edge cases or failure modes that a QA engineer would test but that the current ACs do not cover. An edge case must be specific and testable: not generic ("what if it fails?") but concrete ("what if the file is missing at sync time?").
- Present the gaps: "Story [N] doesn't handle: [edge case 1], [edge case 2]. Should we add scenarios for these?"
- If yes: add `Given / When / Then` blocks for the confirmed edge cases. Mark inferred edge cases `[UNCONFIRMED]` until the user confirms they are real scenarios.
- Collect all unresolved questions in a `## Story Gaps` section at the end.
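A `## Story Gaps` section might look like this (all contents hypothetical, for illustration only):

```markdown
## Story Gaps
- [UNCONFIRMED] Story 2: behavior when the input file is missing at sync time (awaiting user confirmation)
- [UNCONFIRMED] Story 3: concurrent approvals of the same request (is this a real scenario?)
```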
## Anti-Hallucination Rules
- Do NOT invent user types, goals, or benefits not described in source captures.
- Do NOT fabricate edge cases in Gherkin AC without evidence from input files or explicit user confirmation.
- Mark inferred stories and scenarios `[UNCONFIRMED]`; only promote to `[CONFIRMED]` after human sign-off.
- Do NOT proceed without input files: stories generated from nothing are pure hallucination.
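The sign-off gate can be enforced mechanically. A minimal sketch (the function name and draft text below are hypothetical, not part of the skill's scripts):

```python
def unconfirmed_lines(markdown: str) -> list[str]:
    # Lines still tagged [UNCONFIRMED] block promotion until human sign-off.
    return [line.strip() for line in markdown.splitlines() if "[UNCONFIRMED]" in line]

draft = """\
- [CONFIRMED] As a reviewer, I want to approve requests, so that work can proceed.
- [UNCONFIRMED] As an admin, I want an audit log, so that approvals are traceable.
"""
print(len(unconfirmed_lines(draft)))  # → 1
```

Running a check like this before writing the final output file keeps unreviewed inferences from silently shipping as confirmed requirements.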