design-doc
Design Doc Writer
Synthesize a design discussion into a complete, structured design document with agent-ready user stories. The design sections capture the technical context — architecture, data contracts, integration points. The user stories are the primary deliverable: each one is grounded in the design work and precise enough for an AI agent to implement without asking follow-up questions.
The Job
- Review the conversation history for all design discussion
- Actively search the codebase (using glob/grep) to find relevant existing files, architectural patterns, and verify integration points. Do not guess file paths.
- Identify gaps — things not yet discussed that a complete design doc needs
- Ask targeted clarifying questions for only the genuinely unresolved gaps
- Write the design doc following the structure below
- Save it and tell the user
Step 1: Extract & Explore
Before asking anything, mine the conversation for:
- The problem being solved and why it matters
- Goals that were stated or implied
- Constraints and non-goals that came up
- Alternatives that were discussed and reasons they were rejected
- Key decisions and their rationale
- Any open questions or unresolved tradeoffs the user explicitly flagged
Crucially, explore the codebase:
- Read `docs/vision/vision.md` (or equivalent) if it exists — this is the lens through which all design choices should be evaluated. If the design conflicts with the vision, surface it in Open Questions.
- Locate the exact file paths where changes will be made.
- Identify existing data models, API conventions, and utility functions that should be reused.
- Verify that the proposed integration points actually exist and note their current signatures.
If the conversation and your codebase exploration cover most elements, proceed to writing. Only ask about genuinely unresolved gaps.
Step 2: Clarifying Questions (only if needed)
Ask only what is truly missing and necessary to write a complete doc. Batch questions into a single message. Don't ask about things that can be reasonably inferred from the conversation or discovered via codebase search.
Good candidates for questions:
- Success metrics (if none were mentioned)
- Concrete example of expected output or behavior
- Deployment or configuration details not discussed
- Whether there's a vision doc or related prior art to align with
Bad candidates (skip these — infer from context):
- Restating what was already discussed
- Generic questions that apply to any design
- Asking the user to repeat themselves
- Details that can be found by reading the codebase
If you have 3 or fewer questions, ask them directly. If more, prioritize the most blocking ones.
Format Questions Like This:
Provide lettered options so the user can respond quickly (e.g., "1A, 2C, 3B"):
1. What is the primary goal?
A. Improve user onboarding experience
B. Increase user retention
C. Reduce support burden
D. Other: [please specify]
2. What is the scope?
A. Minimal viable version
B. Full-featured implementation
C. Just the backend/API
D. Just the UI
Step 3: Write the Design Doc
Use the structure below. Every section is required. If a section genuinely doesn't apply, include it with a one-line explanation of why it was omitted rather than silently skipping it.
Write concretely — use real names, exact file paths, strict data structures, and examples from the discussion and codebase. Avoid abstract prose when specifics are available. This document must be highly actionable for an AI agent to implement.
Design Doc Structure
# [Feature / System Name]
**Author:** [from context or leave blank]
**Date:** [today's date]
**Status:** Draft
---
## Problem Statement
[What's broken, missing, or painful today? Concrete — not "we want to add X." Answer: what happens without this?]
---
## Goals
1. [Specific, measurable goal]
2. ...
---
## Non-Goals
- [Explicit out-of-scope items and why]
---
## Success Metrics
[How will we know this succeeded? Quantifiable where possible: latency targets, error rate reduction, adoption numbers.]
---
## Design Principles
[Named principles that guided decisions and can serve as tie-breakers for future ambiguity. e.g., "Fail fast over silent degradation."]
---
## Vision Alignment
[If a vision doc exists: 2–4 sentences explaining how this design advances the product vision. Name the specific vision goals or principles it supports. If any aspect of the design is in tension with the vision, state it explicitly and explain why the tradeoff is justified. If no vision doc exists, omit this section.]
---
## Alternatives Considered
### [Option A]
[Description] — **Rejected because:** [reason]
### [Option B]
[Description] — **Rejected because:** [reason]
---
## Architecture Overview
[Component diagram (ASCII) or component list. Reader should understand the system model without reading all prose.]
---
## API & Data Contracts
[Exact TypeScript interfaces, Pydantic models, GraphQL schemas, or REST JSON payloads. Downstream agents need strict contracts, not just descriptions. What gets stored, where, in what format?]
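For instance, a contract at the expected level of strictness might look like the following sketch (purely illustrative; names, fields, and types are hypothetical, not from any real design):
```typescript
// Purely illustrative contract; field names and types are hypothetical.
interface ExportJob {
  id: string;                                        // UUID v4
  status: "queued" | "running" | "failed" | "done";  // state machine values
  requestedBy: string;                               // user ID (foreign key to users.id)
  createdAt: string;                                 // ISO 8601 timestamp
  resultPath: string | null;                         // null until status === "done"
}
```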
---
## Integration Points
[Which existing files, APIs, events, or hooks are modified. Be extremely specific — provide exact file paths and function signatures.]
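For example, a single entry might record the current signature of the code being touched (hypothetical path and signature, shown only to illustrate the expected specificity):
```typescript
// Hypothetical: current signature of the function in src/jobs/scheduler.ts
// that this design would modify.
declare function enqueueJob(payload: { jobId: string }, opts?: { priority?: number }): Promise<void>;
```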
---
## Sequence / Flow Walkthrough
[Step-by-step for the critical path. ASCII or numbered sequence for async flows.]
---
## Example Output
[What does the user actually see? JSON response, CLI output, UI state, or file content. Use a real example.]
---
## Configuration
[Env vars, feature flags, or tunables — with defaults, types, and descriptions.]
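A sketch of the expected level of detail (variable names, types, and defaults are hypothetical):
```typescript
// Hypothetical tunables; names, types, and defaults are illustrative only.
const EXPORT_MAX_ROWS = Number(process.env.EXPORT_MAX_ROWS ?? "10000"); // number: max rows per export job
const EXPORT_DRY_RUN = process.env.EXPORT_DRY_RUN === "true";           // boolean feature flag, default false
```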
---
## Testing Strategy
[Not just "we'll write unit tests." Name specific test cases, edge cases, and integration scenarios, including which test files to create or modify.]
---
## Observability & Logging
[What should be logged to enable troubleshooting without a debugger attached? Cover:
- **Key operations**: What events or state transitions should emit a log? (e.g., "job started", "record saved", "external call returned")
- **Error paths**: What failures must be logged with enough context to diagnose the cause? (e.g., include relevant IDs, inputs, response codes)
- **Log levels**: Which statements are `debug` (verbose, dev-only), `info` (normal operation), `warn` (unexpected but recoverable), `error` (needs attention)?
- **Format**: Structured (JSON with fields) or plain text? Any required fields (trace ID, user ID, timestamp)?
Avoid vague statements like "add appropriate logging." Name the specific operations and the information each log must include.]
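As a sketch of the expected specificity, one logged error path might look like this (event and field names are hypothetical):
```typescript
// Hypothetical helper illustrating level, event name, and required context fields
// for one error path; emits structured JSON via the console.
function logUploadFailure(jobId: string, statusCode: number, traceId: string): void {
  console.error(JSON.stringify({
    level: "error",
    event: "export.upload_failed",       // the key operation that failed
    jobId,                               // correlates the log with the job record
    statusCode,                          // response code from the external service
    traceId,                             // required field for cross-service tracing
    timestamp: new Date().toISOString(),
  }));
}
```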
---
## Edge Cases & Failures
[For each failure mode: how is it detected, and what's the mitigation?]
| Failure | Detection | Mitigation |
|---------|-----------|------------|
| ... | ... | ... |
---
## Risks
| Risk | Likelihood | Impact | Mitigation |
|------|------------|--------|------------|
| ... | ... | ... | ... |
---
## Open Questions
[Unresolved decisions or unknowns. Their presence signals intellectual honesty.]
1. [Question + context on what's blocking resolution]
---
## Context Required for Implementation
[A bulleted list of exact file paths that an implementing agent must read to understand the existing system and constraints before making changes.]
- `src/example/file.ts`
- ...
---
## User Stories
[PREFIX] is a short 3-10 letter abbreviation derived from the feature name.
Minimize the number of user stories while ensuring each story is small enough for an AI agent to complete in one focused session. Avoid over-fragmenting into tiny stories, but do not combine unrelated complex tasks.
Each story must be grounded in the design sections above — reference exact file paths, data contracts, and integration points. An implementing agent should be able to complete the story without reading the rest of this document.
**Important — Documentation Requirements:**
- For structural/architectural changes: require updating design docs to reflect added, updated, or deleted features.
- For CLI tools: require documenting CLI usage (flags, commands, help text).
- For user-facing features: require updating README or usage guides.
- For API changes: require updating API docs or type documentation.
### [PREFIX]-001: [Title]
**Description:** As a [user], I want [feature] so that [benefit].
**Context:**
- Files to read: `path/to/relevant/file.ts`
- Relevant data contracts: [reference the specific interface/schema from API & Data Contracts section]
**Acceptance Criteria:**
- [ ] [Specific, verifiable criterion referencing exact file paths and function names]
- [ ] [Another criterion]
- [ ] Typecheck/lint passes
- [ ] **[Logic/Backend]** Write unit tests covering [specific scenarios]
- [ ] **[UI stories only]** Verify in browser using dev-browser skill
- [ ] **[Documentation]** Update [specific doc — name the file] if applicable
### [PREFIX]-002: [Title]
...
---
## Future Extensions
[Ideas deferred from this design, with rationale for deferral. Shows this is part of a roadmap.]
Step 4: Save and Report
Save the design doc as `docs/design/[slug].md`, where the slug is a short kebab-case name for the feature (e.g., `docs/design/auth-token-refresh.md`). If `docs/design/` doesn't exist, create it.
After saving, tell the user:
- The file path
- A 2–3 sentence summary of what was captured
- Any open questions remaining that they should resolve before the doc is ready for review
Revision Workflow
If a review file exists (e.g., `docs/design/review-[slug].md`, produced by the design-doc-reviewer skill), use it to revise the design doc. Do not blindly adopt all feedback — critically assess each item against the original design intent, constraints, and conversation context.
Phase 1: Triage Feedback
Before making any changes, evaluate every piece of reviewer feedback and assign a disposition:
| Disposition | Meaning | Action |
|---|---|---|
| Accept | Feedback is valid and the suggested fix is appropriate | Apply the fix as suggested |
| Accept (Alt) | Valid concern, but a different solution fits better | Apply your alternative solution |
| Reject | Feedback conflicts with explicit constraints, design intent, or conversation context | Do not change; document reasoning |
| Defer | Valid point but requires user input or is out of scope for this revision | Move to Open Questions |
When triaging, consider:
- Does this feedback conflict with a constraint or decision explicitly discussed in the original conversation?
- Does the reviewer's suggestion introduce complexity that wasn't justified by the design goals?
- Is the reviewer surfacing a real gap, but proposing the wrong fix?
- Would adopting this feedback weaken a section the review itself marked as a Strength?
Phase 2: Revise
Apply changes based on the triage:
- For Accept and Accept (Alt) items: update the relevant sections of the design doc.
- For Reject items: do not change the doc. The reasoning will be captured in the revision notes.
- For Defer items: add them to the Open Questions section with context on what's needed to resolve them.
- Do NOT remove or weaken sections the review marked as Strengths.
Phase 3: Document the Triage
Add a ## Revision Notes section at the end of the design doc (before Future Extensions) with a table summarizing the triage:
## Revision Notes
**Revised [date]:** Addressed review feedback from `review-[slug].md`.
| # | Feedback Item | Disposition | Reasoning |
|---|---------------|-------------|-----------|
| 1 | [Brief description] | Accept | [Why this was valid] |
| 2 | [Brief description] | Accept (Alt) | [What you did instead and why] |
| 3 | [Brief description] | Reject | [Why this conflicts with design intent] |
| 4 | [Brief description] | Defer | [What's needed to resolve] |
Phase 4: Update Status and Report
- Update the doc's Status from `Draft` to `Revised` and add a one-line changelog entry at the top (e.g., **Revised [date]:** Addressed review feedback — added edge case table, clarified auth flow.).
- Tell the user:
- What was changed (accepted items)
- What was rejected and why
- What was deferred to Open Questions
Writing Principles
- Synthesize, don't transcribe. The doc should read as a coherent design artifact, not a transcript of a conversation.
- Concrete over abstract. Use real names, precise file paths, strict data contracts, and code snippets from the discussion/codebase wherever possible.
- Surface decisions. Every significant choice should include a brief rationale. "We chose X because Y" is a design doc. "We chose X" is a changelog.
- Respect what was deferred. If the user explicitly said something is out of scope, put it in Non-Goals or Future Extensions — don't silently drop it or re-raise it.
- Own the gaps. If a required section has nothing from the conversation and couldn't be found in the codebase, write a placeholder that names what's needed. Don't fabricate details.
- Align with the vision. If `docs/vision/vision.md` exists, read it first and ensure the design fits the stated product direction. Note any tension in the Open Questions section.
- User stories are the deliverable. The design sections are context. The user stories are what gets sent to GitHub issues for agents to implement. Every story must be self-contained: an agent reading only that story should know exactly what to do, which files to touch, and how to verify it's done.
- Acceptance criteria must be binary. "Works correctly" is not a criterion. "Returns 404 when user ID doesn't exist" is. Each criterion should be verifiable by running a test or checking a specific behavior.
- Every story needs testing. Backend/logic changes require unit tests. UI changes require browser verification. No exceptions.
- Every story needs documentation. If a story adds user-facing functionality, a CLI flag, an API endpoint, or changes architecture, its acceptance criteria must require updating the relevant docs. Name the specific file to update — don't say "update docs if applicable."