# Test Plan Skill
Generate a thorough, well-structured test plan for any software feature, system, or change.
## Goal
Produce a test plan that a QA engineer or developer can immediately act on — one that covers the right scope, flags risk areas, and is specific enough to actually guide test execution.
## Process

### Step 1: Gather Context
Before writing anything, collect what you need. If the user has shared a spec, ticket, or description, extract from it. Otherwise ask for what's missing. Key questions:
- What is the feature? What does it do, what problem does it solve?
- Who are the users / actors? Who triggers this behavior?
- What's the tech stack / surface? (API, UI, mobile, background job, etc.)
- What already exists? Is this new or modifying existing behavior?
- What's the risk level? (Low-stakes internal tool vs. payment flow)
- Any known constraints or dependencies? (Auth, third-party services, feature flags)
If the user wants to get started quickly without answering questions, proceed with reasonable assumptions and call them out explicitly.
### Step 2: Write the Test Plan
Structure the output as follows. Adapt section depth to the feature's complexity — a simple CRUD endpoint needs less than a multi-actor checkout flow.
### Test Plan Template
# Test Plan: [Feature Name]
## Overview
One paragraph: what this feature does, why it matters, and what this test plan covers.
## Scope
### In Scope
- List of behaviors, flows, and components that will be tested
### Out of Scope
- Explicitly excluded areas (and why, if non-obvious)
## Test Approach
Brief narrative on the testing strategy: which test types apply, any tooling or environment notes, and how coverage is prioritized.
## Test Types
List which of the following apply and why:
- **Unit tests** — isolated logic, functions, pure computation
- **Integration tests** — interactions between components or services
- **End-to-end (E2E) tests** — full user flows through the system
- **API tests** — contract, request/response validation
- **UI/UX tests** — visual correctness, accessibility, responsiveness
- **Performance tests** — load, latency, throughput (flag if relevant)
- **Security tests** — auth, authorization, injection, data exposure
- **Regression tests** — existing behavior that must not break
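To make the unit-test level of this list concrete, here is a minimal sketch. The function under test (`apply_discount`) and its rules are invented for illustration; a real plan would reference the actual code under test.

```python
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent; percent must be in [0, 100]."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Unit test: isolated logic, no collaborators, deterministic result.
def test_apply_discount_happy_path():
    assert apply_discount(100.0, 20.0) == 80.0

# Negative unit test: invalid input is rejected, not silently accepted.
def test_apply_discount_rejects_out_of_range():
    try:
        apply_discount(100.0, 150.0)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")

test_apply_discount_happy_path()
test_apply_discount_rejects_out_of_range()
```

The same behavior exercised through an HTTP endpoint or a database would move it up a level, into integration or API testing.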
## Test Scenarios
Organize by functional area or user flow. For each scenario:
| # | Scenario | Steps | Expected Result | Priority |
|---|----------|-------|-----------------|----------|
| 1 | Happy path: [description] | 1. ... 2. ... | ... | High |
| 2 | ... | | | |
Priority: High (must pass for launch) / Medium (should pass) / Low (nice to have)
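A scenario table in this shape maps directly onto data-driven tests: one row per tuple. A sketch, assuming a hypothetical `validate_username` feature with made-up rules:

```python
def validate_username(name: str) -> bool:
    """Illustrative rule: 3-20 character alphanumeric usernames."""
    return name.isalnum() and 3 <= len(name) <= 20

# Each tuple mirrors one table row: (scenario, input, expected result).
SCENARIOS = [
    ("happy path: typical name", "alice42", True),
    ("too short", "ab", False),
    ("special characters", "bob!", False),
    ("boundary: max length", "a" * 20, True),
]

def run_scenarios():
    for scenario, value, expected in SCENARIOS:
        actual = validate_username(value)
        assert actual == expected, f"{scenario}: got {actual}"

run_scenarios()
```

Keeping scenario names in the data means a failure message points straight back to the row in the plan.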
## Edge Cases & Negative Tests
- Input validation: empty, null, very long, special characters, wrong types
- Boundary values: limits, maximums, minimums
- Concurrency: simultaneous requests, race conditions
- State transitions: invalid state changes, double-submits
- Failure modes: network errors, timeouts, third-party failures
- Permission edge cases: wrong role, expired session, cross-tenant access
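Boundary bullets like the ones above translate into tests just below, at, and just above each limit. A minimal sketch with an assumed page-size limit (the limit and function are invented for illustration):

```python
MAX_PAGE_SIZE = 100  # assumed limit, for illustration only

def clamp_page_size(requested: int) -> int:
    """Clamp a requested page size into the valid range [1, MAX_PAGE_SIZE]."""
    return max(1, min(requested, MAX_PAGE_SIZE))

# Boundary values: below, at, and above each limit.
assert clamp_page_size(0) == 1      # below lower bound
assert clamp_page_size(1) == 1      # at lower bound
assert clamp_page_size(100) == 100  # at upper bound
assert clamp_page_size(101) == 100  # above upper bound
assert clamp_page_size(-5) == 1     # far below, including negatives
```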
## Acceptance Criteria
What must be true for this feature to be considered "done" from a testing perspective? Map back to the feature's stated requirements.
## Risks & Open Questions
- Known unknowns that need clarification before or during testing
- High-risk areas deserving extra attention
- Dependencies on external systems or data
## Test Data Requirements
What data is needed? (e.g., test accounts, specific DB states, mocked services)
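One way to express test-data requirements so they stay maintainable is a base record plus per-scenario overrides. A sketch; the account fields here are placeholders, not a real schema:

```python
import copy

# Baseline test account; individual scenarios override only what they need.
BASE_ACCOUNT = {
    "email": "qa+base@example.com",
    "role": "member",
    "tenant_id": "tenant-a",
}

def make_account(**overrides):
    """Build a test account, copying the base so scenarios don't share state."""
    account = copy.deepcopy(BASE_ACCOUNT)
    account.update(overrides)
    return account

admin = make_account(role="admin")
cross_tenant = make_account(tenant_id="tenant-b")

assert admin["role"] == "admin"
assert cross_tenant["tenant_id"] == "tenant-b"
assert BASE_ACCOUNT["role"] == "member"  # base data left untouched
```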
## Environment Notes
Any specific setup, feature flags, or config needed to run these tests.
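Feature-flag and config requirements are easiest to act on when tests read them explicitly. A sketch using an environment variable; the flag name `NEW_CHECKOUT_ENABLED` is hypothetical:

```python
import os

def new_checkout_enabled() -> bool:
    """Read a feature flag from the environment; defaults to off."""
    return os.environ.get("NEW_CHECKOUT_ENABLED", "false").lower() == "true"

# An environment-notes section would state that this variable must be set
# before running the affected suite.
os.environ["NEW_CHECKOUT_ENABLED"] = "true"
assert new_checkout_enabled()
```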
### Step 3: Calibrate to Context
After generating the plan, briefly flag:
- Top 3 highest-risk scenarios (if not obvious from the plan)
- Any gaps you couldn't fill without more info
- Suggested priority order if the team needs to triage
## Quality Bar
A good test plan from this skill should:
- Be specific enough that someone unfamiliar with the feature could execute it
- Cover both the happy path and realistic failure modes
- Make explicit what is not being tested (and why)
- Be proportional — don't pad a simple feature with 40 low-value test cases
- Use plain language; avoid vague phrases like "verify it works correctly"
## Format Notes
- Default output: Markdown (renders well in Notion, Linear, GitHub, Confluence)
- If the user wants a Word doc or other format, use the appropriate skill
- Tables are preferred for test scenarios; prose for narrative sections
- Keep scenario descriptions action-oriented: "User submits form with missing required field" not "Test form validation"