# Write User Story
## User Story Structure
Every story must have these sections (in order):
- Purpose/Overview — What it achieves and why. Include business context, user persona, and links. Optionally add a Technical Details/Approach sub-section for architectural decisions without over-specifying implementation.
- Requirements/Scope — Functional requirements (specific, measurable). Non-functional requirements only when they have measurable impact. Out of Scope only when genuine ambiguity exists.
- Acceptance Criteria — Testable conditions in Given-When-Then format.
- Testing Guidance — Manual testing scenarios only. No unit/integration test specs.
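Filled in, an acceptance criterion in Given-When-Then form might read like this (the renewal scenario is purely illustrative):

```markdown
**AC1: Successful loan renewal**
- Given a signed-in user with an active loan that is eligible for renewal
- When the user selects "Renew" for that loan
- Then the due date is extended by the configured loan period and a confirmation message is displayed
```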
## Template
## Purpose/Overview
[High-level description of what this story achieves and why it's important.
Include business context, user persona, and links to related work.]
### Technical Details/Approach (Optional)
[Architectural decisions or implementation strategy — don't over-specify.]
---
## Requirements/Scope
### Functional Requirements
1. [Specific functionality to implement]
2. [Input/output expectations]
3. [Business rules and constraints]
### Non-Functional Requirements (only if significant)
1. [Data integrity requirements]
2. [Performance — only if measurable impact]
3. [Security — only if specific requirements exist]
### Out of Scope (only if needed for clarity)
- [Include ONLY if there's genuine ambiguity about scope]
---
## Acceptance Criteria
**AC1: [Scenario name]**
- Given [initial context]
- When [action occurs]
- Then [expected outcome]

**AC2: [Error handling scenario]**
- Given [error condition]
- When [action occurs]
- Then [expected error behavior]
---
## Testing Guidance
### Manual Testing
**Scenario 1: [Primary user workflow]**
1. [Step-by-step instructions]
2. [Expected outcomes]
**Scenario 2: [Edge case]**
1. [Steps]
2. [Expected outcomes]
**Note:** Unit test specs, integration test code, and test data details belong in the implementation plan, not here.
---
## Additional Notes (optional)
[Risks, dependencies, or other considerations]
## Related Links (optional)
- [Design documents, API specs, related stories]
## Writing Guidelines
### Standard User Story Format

- As a [user persona/role]
- I want [goal/desire]
- So that [benefit/value]
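A filled-in version of the format might look like this (the persona and goal are hypothetical):

```markdown
As a returning customer
I want to save my shipping address
So that I can check out faster on future orders
```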
### INVEST Principles
- Independent — deliverable separately
- Negotiable — details can be refined
- Valuable — delivers clear value
- Estimable — team can estimate effort
- Small — completable within one sprint
- Testable — clear verification criteria
### Best Practices
#### Do's ✓
- Write from the user perspective — focus on value, not implementation
- Make acceptance criteria specific and testable
- Include error scenarios and edge cases
- Use consistent domain terminology
- Link dependencies and related stories
- Collaborate with developers, testers, and stakeholders
#### Don'ts ✗
- Don't write technical tasks as stories ("Refactor UserService" is a task, not a story)
- Don't be vague — "improve performance" needs metrics
- Don't skip acceptance criteria
- Don't add non-functional requirements unless they have measurable impact
- Don't add "Out of Scope" unless there's genuine ambiguity
- Don't include unit/integration test specs in Testing Guidance
- Don't over-specify implementation — leave the "how" to developers
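To illustrate the vagueness rule above, compare a vague requirement with a measurable rewrite (the numbers are hypothetical, not benchmarks):

```markdown
✗ Vague: "Search should be fast."
✓ Measurable: "Search results render within 2 seconds for queries returning up to 10,000 records."
```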
## Quick Reference Checklist
- Purpose clearly explains the value and context
- Requirements are specific and measurable
- Acceptance criteria are testable and unambiguous
- Error scenarios and edge cases are covered
- Testing guidance contains manual scenarios only
- Dependencies are identified and linked
- Story is sized for one sprint
- Technical approach is outlined if needed, but not over-specified
- Non-functional requirements included only if significant
- "Out of Scope" omitted unless genuinely needed
For more detail, see the reference files:

- Deep-dive guidance on each section: references/section-details.md
- Common pitfalls with before/after examples: references/pitfalls.md
- A complete example story: references/example.md
- JIRA markup conversion: references/jira.md