QA Test Planner

Plan and document QA deliverables — test plans, test cases, regression suites, Figma validations, and bug reports — in a structured format compatible with the qa-execution companion skill.

Inputs

  • qa-output-path (optional): Directory where all QA artifacts are stored. When provided, create the directory if it does not exist. When omitted, use the current working directory. This path must match the same argument passed to qa-execution when both skills are used together.

Shared Output Structure

All artifacts follow this directory layout, shared with qa-execution:

<qa-output-path>/qa/
├── test-plans/          # Test plan documents
├── test-cases/          # Individual test case files (TC-*.md)
├── issues/              # Bug reports (BUG-*.md)
├── screenshots/         # Visual evidence and Figma comparisons
└── verification-report.md  # Generated by qa-execution

Procedures

Step 1: Resolve Output Directory

  1. If the user provided a qa-output-path argument, use that path.
  2. Otherwise, default to the current working directory.
  3. Create the qa/ subdirectory under the resolved path, then create qa/test-plans/, qa/test-cases/, qa/issues/, and qa/screenshots/ if they do not exist.
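
The directory-resolution steps above can be sketched as follows (a minimal sketch; the function name is illustrative, not part of the skill):

```python
from pathlib import Path

def resolve_qa_dirs(qa_output_path=None):
    """Resolve the output root and create the shared qa/ layout."""
    # Fall back to the current working directory when no path is given.
    root = Path(qa_output_path) if qa_output_path else Path.cwd()
    qa = root / "qa"
    for sub in ("test-plans", "test-cases", "issues", "screenshots"):
        (qa / sub).mkdir(parents=True, exist_ok=True)
    return qa
```

`exist_ok=True` makes the setup idempotent, so re-running the skill against an existing qa/ tree is safe.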

Step 2: Identify the Deliverable Type

Parse the user request to determine which deliverable to generate:

| Request Pattern | Deliverable | Output Path |
| --- | --- | --- |
| "Create test plan for..." | Test Plan | test-plans/ |
| "Generate test cases for..." | Test Cases | test-cases/ |
| "Build regression suite..." | Regression Suite | test-plans/ |
| "Compare with Figma..." | Figma Validation | test-cases/ (TC-UI-*) |
| "Document bug..." | Bug Report | issues/ |

Step 3: Generate Test Plans

  1. Read references/test_case_templates.md for the test plan structure.
  2. Generate a test plan document with these mandatory sections:
    • Executive summary with objectives and key risks.
    • Scope definition (in-scope and out-of-scope).
    • Test strategy and approach.
    • Automation strategy covering which flows should become E2E, which remain manual-only, and which are blocked by environment gaps.
    • Environment requirements (OS, browsers, devices).
    • Entry criteria (what must be true before testing begins).
    • Exit criteria (what must be true before testing ends, including pass-rate thresholds and automation follow-up expectations for critical flows).
    • Risk assessment table (Risk, Probability, Impact, Mitigation).
    • Timeline and deliverables.
  3. Write the plan to <qa-output-path>/qa/test-plans/<feature-slug>-test-plan.md.

Step 4: Generate Test Cases

  1. Read references/test_case_templates.md to select the appropriate template variant (Functional, UI, Integration, Regression, Security, Performance).

  2. Assign each test case an ID following the naming scheme:

    | Type | Prefix | Example |
    | --- | --- | --- |
    | Functional | TC-FUNC- | TC-FUNC-001 |
    | UI/Visual | TC-UI- | TC-UI-045 |
    | Integration | TC-INT- | TC-INT-012 |
    | Regression | TC-REG- | TC-REG-089 |
    | Security | TC-SEC- | TC-SEC-005 |
    | Performance | TC-PERF- | TC-PERF-023 |
    | Smoke | SMOKE- | SMOKE-001 |
  3. Each test case must include:

    • Priority: P0 (Critical) | P1 (High) | P2 (Medium) | P3 (Low).
    • Objective: What is being validated and why.
    • Preconditions: Setup requirements and test data.
    • Test Steps: Numbered actions with an **Expected:** result for each.
    • Edge Cases: Boundary values, null inputs, special characters.
    • Automation Target: E2E, Integration, or Manual-only.
    • Automation Status: Existing, Missing, Blocked, or N/A.
    • Automation Command/Spec: Existing spec path or command when known.
    • Automation Notes: Why the case should be automated, remain manual, or is blocked.
  4. Write each test case to <qa-output-path>/qa/test-cases/<TC-ID>.md.

  5. When generating test cases interactively, execute scripts/generate_test_cases.sh <qa-output-path>/qa/test-cases.
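
Assigning the next sequential ID under the naming scheme above can be sketched by scanning the existing test-case files (a sketch; the helper name is illustrative):

```python
import re
from pathlib import Path

def next_test_case_id(test_cases_dir, prefix="TC-FUNC-"):
    """Scan the test-cases directory and return the next zero-padded ID."""
    pattern = re.compile(re.escape(prefix) + r"(\d+)\.md$")
    existing = [
        int(m.group(1))
        for p in Path(test_cases_dir).glob("*.md")
        if (m := pattern.match(p.name))
    ]
    # Start at 001 when no file with this prefix exists yet.
    return f"{prefix}{max(existing, default=0) + 1:03d}"
```

Deriving IDs from the files already on disk keeps numbering stable across repeated planning sessions against the same qa-output-path.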

Step 5: Build Regression Suites

  1. Read references/regression_testing.md for suite structure and execution strategy.

  2. Classify tests into tiers:

    | Suite | Duration | Frequency | Coverage |
    | --- | --- | --- | --- |
    | Smoke | 15-30 min | Daily/per-build | Critical paths only |
    | Targeted | 30-60 min | Per change | Affected areas |
    | Full | 2-4 hours | Weekly/Release | Comprehensive |
    | Sanity | 10-15 min | After hotfix | Quick validation |
  3. Prioritize test cases using the shared priority scale:

    • P0: Business-critical, security, revenue-impacting — must run always.
    • P1: Major features, common flows — run weekly or more.
    • P2: Minor features, edge cases — run at releases.
  4. Mark automation candidates explicitly:

    • Tag changed or regression-critical P0 and P1 public flows as Automation Target: E2E when the repository already has an E2E harness.
    • Tag bug-driven public regressions as Automation Status: Missing until qa-execution confirms the spec was added or updated.
    • Tag exploratory, visual-judgment, or unsupported flows as Manual-only or Blocked with a reason.
  5. Define execution order: Smoke first (if fails, stop) → P0 → P1 → P2 → Exploratory.

  6. Define pass/fail criteria:

    • PASS: All P0 pass, 90%+ P1 pass, no critical bugs open.
    • FAIL: Any P0 fails, critical bug discovered, security vulnerability, data loss.
    • CONDITIONAL: P1 failures with documented workarounds, fix plan in place.
  7. Write the suite document to <qa-output-path>/qa/test-plans/<suite-name>-regression.md.
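
The pass/fail criteria from item 6 can be expressed as a small verdict function (a sketch; the result-record field names are assumptions, the thresholds come from the criteria above):

```python
def suite_verdict(results):
    """Classify a suite run as PASS, FAIL, or CONDITIONAL.

    results: list of dicts with 'priority' ('P0'..'P3'), 'passed' (bool),
    and an optional 'critical_bug' flag for critical findings during the run.
    """
    p0 = [r for r in results if r["priority"] == "P0"]
    p1 = [r for r in results if r["priority"] == "P1"]
    critical_bug = any(r.get("critical_bug") for r in results)

    # Any P0 failure or critical finding fails the suite outright.
    if any(not r["passed"] for r in p0) or critical_bug:
        return "FAIL"
    p1_rate = sum(r["passed"] for r in p1) / len(p1) if p1 else 1.0
    if p1_rate >= 0.90:
        return "PASS"
    # P1 failures below threshold; CONDITIONAL still requires documented
    # workarounds and a fix plan, which this sketch does not model.
    return "CONDITIONAL"
```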

Step 6: Validate Against Figma Designs

Skip this step if Figma MCP is not configured.

  1. Read references/figma_validation.md for the validation workflow.
  2. Extract design specifications from Figma using MCP queries:
    • Dimensions (width, height).
    • Colors (background, text, border — exact hex values).
    • Typography (font family, size, weight, line-height, color).
    • Spacing (padding, margin).
    • Border radius, shadows.
    • Interactive states (default, hover, active, focus, disabled).
  3. Generate UI test cases (TC-UI-*) that compare each property against the implementation.
  4. Test responsive behavior at these standard viewports:
    • Mobile: 375px.
    • Tablet: 768px.
    • Desktop: 1280px.
  5. When validation reveals discrepancies, generate a bug report following Step 7.
  6. Use agent-browser (from the qa-execution companion skill) when browser-based verification is needed. The core loop is: open → snapshot → interact → re-snapshot → verify.
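
Property-by-property comparison of an extracted design spec against measured implementation values can be sketched as a dict diff (a sketch under assumptions: the keys mirror item 2, and the one-pixel tolerance for numeric values is an illustrative choice, not a Figma MCP behavior):

```python
def diff_specs(design, actual, px_tolerance=1):
    """Return {property: (design_value, actual_value)} for every mismatch.

    Numeric values (dimensions, spacing) tolerate px_tolerance;
    strings (hex colors, font families) must match exactly.
    """
    mismatches = {}
    for prop, expected in design.items():
        got = actual.get(prop)
        if isinstance(expected, (int, float)) and isinstance(got, (int, float)):
            if abs(expected - got) > px_tolerance:
                mismatches[prop] = (expected, got)
        elif got != expected:
            mismatches[prop] = (expected, got)
    return mismatches
```

Each returned mismatch maps directly onto a TC-UI-* failure and, per item 5, a bug report; note the exact-match rule means hex colors should be case-normalized before comparing.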

Step 7: Create Bug Reports

  1. Use the unified bug report format from assets/issue-template.md, shared with qa-execution.
  2. Assign a bug ID with the BUG- prefix (e.g., BUG-001).
  3. Every bug report must include:
    • Severity: Critical | High | Medium | Low.
    • Priority: P0 | P1 | P2 | P3.
    • Environment: Build, OS, Browser, URL.
    • Reproduction: Exact steps to reproduce.
    • Expected vs Actual: Clear descriptions.
    • Impact: Users affected, frequency, workaround.
    • Related: TC-ID if discovered during test case execution, Figma URL if UI bug.
  4. Write each bug report to <qa-output-path>/qa/issues/<BUG-ID>.md.
  5. When creating bug reports interactively, execute scripts/create_bug_report.sh <qa-output-path>/qa/issues.
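
Rendering a bug report that enforces the required fields from item 3 can be sketched as (a sketch; the section layout is an assumption, not the actual assets/issue-template.md):

```python
REQUIRED_FIELDS = [
    "Severity", "Priority", "Environment",
    "Reproduction", "Expected vs Actual", "Impact",
]

def render_bug_report(bug_id, title, fields):
    """Render a minimal BUG-*.md body, failing fast on missing required fields."""
    missing = [f for f in REQUIRED_FIELDS if f not in fields]
    if missing:
        raise ValueError(f"missing required fields: {missing}")
    lines = [f"# {bug_id}: {title}", ""]
    # "Related" (TC-ID, Figma URL) is conditional, so it renders only if given.
    for name in REQUIRED_FIELDS + ["Related"]:
        if name in fields:
            lines += [f"## {name}", str(fields[name]), ""]
    return "\n".join(lines)
```

Failing fast on missing fields mirrors the Step 8 completeness checks, catching incomplete reports at creation time rather than during validation.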

Step 8: Validate Completeness

  1. Verify all generated test cases have an expected result for each step.
  2. Verify all bug reports have reproducible steps.
  3. Verify traceability: test cases reference requirements, bugs reference test cases.
  4. Verify every planned critical flow has an explicit automation annotation and that Missing or Blocked states include a reason.
  5. Cross-reference against ../qa-execution/references/checklist.md for coverage gaps when planning for later execution.
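
The first completeness check (every step has an expected result) can be sketched as a line scan over a test-case body (a heuristic sketch, assuming steps are numbered `1.` lines and results use the **Expected:** marker from Step 4):

```python
import re

def steps_missing_expected(test_case_text):
    """Return the numbers of steps not followed by an **Expected:** line."""
    missing = []
    lines = test_case_text.splitlines()
    for i, line in enumerate(lines):
        m = re.match(r"\s*(\d+)\.\s", line)
        if not m:
            continue
        # Collect the lines belonging to this step, up to the next numbered step.
        block = []
        for nxt in lines[i + 1:]:
            if re.match(r"\s*\d+\.\s", nxt):
                break
            block.append(nxt)
        if not any("**Expected:**" in b for b in block):
            missing.append(int(m.group(1)))
    return missing
```

An empty return value means the case passes this check; a non-empty list names the steps to fix before handing off to qa-execution.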

Severity Definitions

| Level | Criteria | Examples |
| --- | --- | --- |
| Critical | System crash, data loss, security breach | Payment fails, login broken |
| High | Major feature broken, no workaround | Search not working, checkout fails |
| Medium | Feature partial, workaround exists | Filter missing option, slow load |
| Low | Cosmetic, rare edge case | Typo, minor alignment |

Priority vs Severity Matrix

| Frequency \ Impact | Low | Medium | High | Critical |
| --- | --- | --- | --- | --- |
| Rare | P3 | P3 | P2 | P1 |
| Sometimes | P3 | P2 | P1 | P0 |
| Often | P2 | P1 | P0 | P0 |
| Always | P2 | P1 | P0 | P0 |
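
The matrix above can be encoded as a direct lookup (a sketch mirroring the table; the key names are assumptions):

```python
# Rows: how often users hit the issue; columns: severity of the impact.
PRIORITY_MATRIX = {
    "Rare":      {"Low": "P3", "Medium": "P3", "High": "P2", "Critical": "P1"},
    "Sometimes": {"Low": "P3", "Medium": "P2", "High": "P1", "Critical": "P0"},
    "Often":     {"Low": "P2", "Medium": "P1", "High": "P0", "Critical": "P0"},
    "Always":    {"Low": "P2", "Medium": "P1", "High": "P0", "Critical": "P0"},
}

def assign_priority(frequency, impact):
    """Map observed frequency and impact severity to a P0-P3 priority."""
    return PRIORITY_MATRIX[frequency][impact]
```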

Companion Skill: qa-execution

The qa-report and qa-execution skills share a common output directory and artifact format. The intended workflow:

  1. Plan first with qa-report: generate test plans, test cases, and regression suites.
  2. Execute with qa-execution: run verification gates, exercise flows end-to-end, discover bugs, and add or update E2E coverage when the repository already supports it.
  3. Document with qa-report: create structured bug reports for issues found during execution.

When qa-execution runs after qa-report, it reads test cases from <qa-output-path>/qa/test-cases/ to inform its execution matrix, automation priorities, and reporting fields, then writes bugs to <qa-output-path>/qa/issues/ using the same unified template.

Error Handling

  • If the qa-output-path directory cannot be created, report the error and fall back to the current working directory.
  • If Figma MCP is not configured, skip Figma validation steps and note the gap in the test plan.
  • If agent-browser is not available for UI validation, generate test cases as documentation for manual execution and note the limitation.
  • If the repository does not have a known E2E harness, mark affected cases as Manual-only or Blocked instead of inventing automation commands.
  • If the user provides a feature description that is too vague to generate test cases, ask for specific requirements, user flows, or acceptance criteria before proceeding.