# qa-report: QA Test Planner

Plan and document QA deliverables — test plans, test cases, regression suites, Figma validations, and bug reports — in a structured format compatible with the companion `qa-execution` skill.
## Required Inputs

- `qa-output-path` (optional): Directory where all QA artifacts are stored. When provided, create the directory if it does not exist. When omitted, use the current working directory. This path must match the argument passed to `qa-execution` when both skills are used together.
## Shared Output Structure

All artifacts follow this directory layout, shared with `qa-execution`:

```
<qa-output-path>/qa/
├── test-plans/              # Test plan documents
├── test-cases/              # Individual test case files (TC-*.md)
├── issues/                  # Bug reports (BUG-*.md)
├── screenshots/             # Visual evidence and Figma comparisons
└── verification-report.md   # Generated by qa-execution
```
## Procedures

### Step 1: Resolve Output Directory
- If the user provided a `qa-output-path` argument, use that path.
- Otherwise, default to the current working directory.
- Create the `qa/` subdirectory under the resolved path, then create `qa/test-plans/`, `qa/test-cases/`, `qa/issues/`, and `qa/screenshots/` if they do not exist.
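The resolution logic above can be sketched in POSIX shell. `QA_OUTPUT_PATH` is a hypothetical variable standing in for the user-supplied `qa-output-path` argument; when unset, the current working directory is used.

```shell
# Sketch of Step 1: resolve the QA root, then create all subdirectories.
# QA_OUTPUT_PATH is an assumed variable name, not part of the skill contract.
qa_root="${QA_OUTPUT_PATH:-$(pwd)}/qa"
mkdir -p "$qa_root/test-plans" "$qa_root/test-cases" \
         "$qa_root/issues" "$qa_root/screenshots"
```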
### Step 2: Identify the Deliverable Type
Parse the user request to determine which deliverable to generate:
| Request Pattern | Deliverable | Output Path |
|---|---|---|
| "Create test plan for..." | Test Plan | `test-plans/` |
| "Generate test cases for..." | Test Cases | `test-cases/` |
| "Build regression suite..." | Regression Suite | `test-plans/` |
| "Compare with Figma..." | Figma Validation | `test-cases/` (TC-UI-*) |
| "Document bug..." | Bug Report | `issues/` |
### Step 3: Generate Test Plans

- Read `references/test_case_templates.md` for the test plan structure.
- Generate a test plan document with these mandatory sections:
  - Executive summary with objectives and key risks.
  - Scope definition (in-scope and out-of-scope).
  - Test strategy and approach.
  - Automation strategy covering which flows should become E2E, which remain manual-only, and which are blocked by environment gaps.
  - Environment requirements (OS, browsers, devices).
  - Entry criteria (what must be true before testing begins).
  - Exit criteria (what must be true before testing ends, including pass-rate thresholds and automation follow-up expectations for critical flows).
  - Risk assessment table (Risk, Probability, Impact, Mitigation).
  - Timeline and deliverables.
- Write the plan to `<qa-output-path>/qa/test-plans/<feature-slug>-test-plan.md`.
### Step 4: Generate Test Cases

- Read `references/test_case_templates.md` to select the appropriate template variant (Functional, UI, Integration, Regression, Security, Performance).
- Assign each test case an ID following the naming scheme:

| Type | Prefix | Example |
|---|---|---|
| Functional | TC-FUNC- | TC-FUNC-001 |
| UI/Visual | TC-UI- | TC-UI-045 |
| Integration | TC-INT- | TC-INT-012 |
| Regression | TC-REG- | TC-REG-089 |
| Security | TC-SEC- | TC-SEC-005 |
| Performance | TC-PERF- | TC-PERF-023 |
| Smoke | SMOKE- | SMOKE-001 |

- Each test case must include:
  - Priority: P0 (Critical) | P1 (High) | P2 (Medium) | P3 (Low).
  - Objective: What is being validated and why.
  - Preconditions: Setup requirements and test data.
  - Test Steps: Numbered actions with an **Expected:** result for each.
  - Edge Cases: Boundary values, null inputs, special characters.
  - Automation Target: `E2E`, `Integration`, or `Manual-only`.
  - Automation Status: `Existing`, `Missing`, `Blocked`, or `N/A`.
  - Automation Command/Spec: Existing spec path or command when known.
  - Automation Notes: Why the case should be automated, remain manual, or is blocked.
- Write each test case to `<qa-output-path>/qa/test-cases/<TC-ID>.md`.
- When generating test cases interactively, execute `scripts/generate_test_cases.sh <qa-output-path>/qa/test-cases`.
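Sequential ID assignment under the naming scheme above can be sketched as follows. `next_tc_id` is a hypothetical helper, not one of the skill's bundled scripts; it scans existing files for the highest number under a prefix and emits the next zero-padded ID (`expr` is used so zero-padded numbers are read as decimal):

```shell
# Sketch: compute the next available test-case ID for a given prefix.
next_tc_id() {  # usage: next_tc_id <prefix> <test-cases-dir>
  prefix="$1"; dir="$2"
  # Highest existing ID; zero-padding makes lexicographic sort numeric.
  last=$(ls "$dir" 2>/dev/null | grep -o "${prefix}[0-9]\{3\}" | sort | tail -n 1)
  num=${last#"$prefix"}
  printf '%s%03d\n' "$prefix" "$(expr "${num:-0}" + 1)"
}
```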
### Step 5: Build Regression Suites

- Read `references/regression_testing.md` for suite structure and execution strategy.
- Classify tests into tiers:

| Suite | Duration | Frequency | Coverage |
|---|---|---|---|
| Smoke | 15-30 min | Daily/per-build | Critical paths only |
| Targeted | 30-60 min | Per change | Affected areas |
| Full | 2-4 hours | Weekly/Release | Comprehensive |
| Sanity | 10-15 min | After hotfix | Quick validation |

- Prioritize test cases using the shared priority scale:
  - P0: Business-critical, security, revenue-impacting — must always run.
  - P1: Major features, common flows — run weekly or more often.
  - P2: Minor features, edge cases — run at releases.
- Mark automation candidates explicitly:
  - Tag changed or regression-critical P0 and P1 public flows as `Automation Target: E2E` when the repository already has an E2E harness.
  - Tag bug-driven public regressions as `Automation Status: Missing` until `qa-execution` confirms the spec was added or updated.
  - Tag exploratory, visual-judgment, or unsupported flows as `Manual-only` or `Blocked` with a reason.
- Define execution order: Smoke first (if it fails, stop) → P0 → P1 → P2 → Exploratory.
- Define pass/fail criteria:
  - PASS: All P0 pass, 90%+ P1 pass, no critical bugs open.
  - FAIL: Any P0 fails, critical bug discovered, security vulnerability, data loss.
  - CONDITIONAL: P1 failures with documented workarounds, fix plan in place.
- Write the suite document to `<qa-output-path>/qa/test-plans/<suite-name>-regression.md`.
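The pass/fail gate above can be sketched as a function over pre-computed counts. The function name, argument order, and the assumption that the counts are already tallied are all illustrative; the CONDITIONAL branch only flags the state — confirming workarounds and a fix plan remains a manual judgment:

```shell
# Sketch of the Step 5 verdict logic.
suite_verdict() {  # usage: suite_verdict <p0-fails> <p1-total> <p1-passes> <open-criticals>
  p0_fail=$1; p1_total=$2; p1_pass=$3; critical_open=$4
  if [ "$p0_fail" -gt 0 ] || [ "$critical_open" -gt 0 ]; then
    echo FAIL          # any P0 failure or open critical bug fails the suite
  elif [ $(( p1_pass * 100 )) -ge $(( p1_total * 90 )) ]; then
    echo PASS          # all P0 pass and at least 90% of P1 pass
  else
    echo CONDITIONAL   # P1 failures: requires documented workarounds + fix plan
  fi
}
```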
### Step 6: Validate Against Figma Designs

Skip this step if Figma MCP is not configured.

- Read `references/figma_validation.md` for the validation workflow.
- Extract design specifications from Figma using MCP queries:
  - Dimensions (width, height).
  - Colors (background, text, border — exact hex values).
  - Typography (font family, size, weight, line-height, color).
  - Spacing (padding, margin).
  - Border radius, shadows.
  - Interactive states (default, hover, active, focus, disabled).
- Generate UI test cases (TC-UI-*) that compare each property against the implementation.
- Test responsive behavior at these standard viewports:
  - Mobile: 375px.
  - Tablet: 768px.
  - Desktop: 1280px.
- When validation reveals discrepancies, generate a bug report following Step 7.
- Use `agent-browser` (from the `qa-execution` companion skill) when browser-based verification is needed. The core loop is: open → snapshot → interact → re-snapshot → verify.
### Step 7: Create Bug Reports

- Use the unified bug report format from `assets/issue-template.md`, shared with `qa-execution`.
- Assign a bug ID with the `BUG-` prefix (e.g., `BUG-001`).
- Every bug report must include:
  - Severity: Critical | High | Medium | Low.
  - Priority: P0 | P1 | P2 | P3.
  - Environment: Build, OS, Browser, URL.
  - Reproduction: Exact steps to reproduce.
  - Expected vs Actual: Clear descriptions.
  - Impact: Users affected, frequency, workaround.
  - Related: TC-ID if discovered during test case execution, Figma URL if UI bug.
- Write each bug report to `<qa-output-path>/qa/issues/<BUG-ID>.md`.
- When creating bug reports interactively, execute `scripts/create_bug_report.sh <qa-output-path>/qa/issues`.
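Scaffolding a bug report with the mandatory fields above can be sketched as follows. `new_bug_report` and its argument order are hypothetical; the authoritative field layout lives in `assets/issue-template.md`, which this sketch only approximates:

```shell
# Sketch: create a skeleton bug report containing the Step 7 fields.
new_bug_report() {  # usage: new_bug_report <issues-dir> <BUG-ID>
  cat > "$1/$2.md" <<EOF
# $2
- Severity:
- Priority:
- Environment:
- Reproduction:
- Expected vs Actual:
- Impact:
- Related:
EOF
}
```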
### Step 8: Validate Completeness

- Verify all generated test cases have an expected result for each step.
- Verify all bug reports have reproducible steps.
- Verify traceability: test cases reference requirements, bugs reference test cases.
- Verify every planned critical flow has an explicit automation annotation and that `Missing` or `Blocked` states include a reason.
- Cross-reference against `../qa-execution/references/checklist.md` for coverage gaps when planning for later execution.
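The first check above can be sketched mechanically: list any test-case file that never states an **Expected:** result. The function name is a hypothetical helper, and the directory argument is assumed to be `<qa-output-path>/qa/test-cases`:

```shell
# Sketch of a Step 8 check: flag test cases missing expected results.
check_expected_results() {  # usage: check_expected_results <test-cases-dir>
  for f in "$1"/TC-*.md; do
    [ -e "$f" ] || continue   # skip when the glob matches nothing
    grep -q '\*\*Expected:\*\*' "$f" || echo "missing expected result: $f"
  done
}
```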
## Severity Definitions
| Level | Criteria | Examples |
|---|---|---|
| Critical | System crash, data loss, security breach | Payment fails, login broken |
| High | Major feature broken, no workaround | Search not working, checkout fails |
| Medium | Feature partial, workaround exists | Filter missing option, slow load |
| Low | Cosmetic, rare edge case | Typo, minor alignment |
## Priority vs Severity Matrix

| Frequency | Low Impact | Medium | High | Critical |
|---|---|---|---|---|
| Rare | P3 | P3 | P2 | P1 |
| Sometimes | P3 | P2 | P1 | P0 |
| Often | P2 | P1 | P0 | P0 |
| Always | P2 | P1 | P0 | P0 |
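The matrix above can be expressed as a lookup; the function name is illustrative, while the frequency and impact labels match the table rows and columns exactly:

```shell
# Sketch: map (frequency, impact) to a priority per the matrix above.
priority_for() {  # usage: priority_for <frequency> <impact>
  case "$1:$2" in
    Rare:Low|Rare:Medium|Sometimes:Low)                      echo P3 ;;
    Rare:High|Sometimes:Medium|Often:Low|Always:Low)         echo P2 ;;
    Rare:Critical|Sometimes:High|Often:Medium|Always:Medium) echo P1 ;;
    Sometimes:Critical|Often:High|Often:Critical|Always:High|Always:Critical)
                                                             echo P0 ;;
    *) echo "unknown combination: $1/$2" >&2; return 1 ;;
  esac
}
```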
## Companion Skill: qa-execution

The `qa-report` and `qa-execution` skills share a common output directory and artifact format. The intended workflow:

- Plan first with `qa-report`: generate test plans, test cases, and regression suites.
- Execute with `qa-execution`: run verification gates, exercise flows end-to-end, discover bugs, and add or update E2E coverage when the repository already supports it.
- Document with `qa-report`: create structured bug reports for issues found during execution.

When `qa-execution` runs after `qa-report`, it reads test cases from `<qa-output-path>/qa/test-cases/` to inform its execution matrix, automation priorities, and reporting fields, then writes bugs to `<qa-output-path>/qa/issues/` using the same unified template.
## Error Handling

- If the `qa-output-path` directory cannot be created, report the error and fall back to the current working directory.
- If Figma MCP is not configured, skip Figma validation steps and note the gap in the test plan.
- If `agent-browser` is not available for UI validation, generate test cases as documentation for manual execution and note the limitation.
- If the repository does not have a known E2E harness, mark affected cases as `Manual-only` or `Blocked` instead of inventing automation commands.
- If the user provides a feature description that is too vague to generate test cases, ask for specific requirements, user flows, or acceptance criteria before proceeding.