Plan Tests

When to use

  • After explore-app has produced exploration reports for all scenarios in parsed-spec.md
  • When running as part of the run-testing-session pipeline (Stage 3)
  • When re-synthesizing the plan after a UI change or new exploration

Inputs

  • docs/playwright-spec-testing/project-context.md — existing infrastructure, conventions, routing
  • docs/playwright-spec-testing/exploration/<slug>.md — one file per scenario (all of them)
  • docs/playwright-spec-testing/parsed-spec.md — scenario list and slugs
  • .playwright-cli/ (optional) — YAML snapshots and screenshots for selector cross-checking

What it does

Synthesizes a complete, human-reviewable test plan grounded in observed DOM data and the existing project architecture. Every assertion must be traceable to an exploration report; no invented selectors or URLs.

Phase 1: Read all inputs

  1. Read project-context.md — note testDir, tech stack, existing specs, routing structure
  2. Read parsed-spec.md — extract scenario names and slugs
  3. For each scenario, read exploration/<slug>.md
  4. Scan .playwright-cli/ for artifacts matching each scenario (if present):
    • Match YAML snapshots and screenshots to scenarios by comparing filenames/timestamps against scenario slugs and the timestamps recorded in each exploration/<slug>.md.

    • For each scenario with matching artifacts, build an artifact context block:

      ## Artifact context: <scenario-slug>
      - Snapshot: .playwright-cli/page-<timestamp>.yml  → selectors found: [list role/label/testid selectors]
      - Screenshot: .playwright-cli/screenshot-<timestamp>.png
      
    • Append this block to that scenario's exploration report context before synthesis begins.

    • If .playwright-cli/ is absent or empty, skip this step entirely and synthesize from the exploration reports alone.
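
The filename/timestamp matching in step 4 can be mechanical. A minimal sketch in TypeScript, assuming timestamps appear verbatim both in each exploration report and in the .playwright-cli filenames; the digit-run timestamp pattern and the helper name are illustrative assumptions, not part of the skill:

```ts
// Sketch only: pair .playwright-cli artifacts with a scenario by timestamp.
// Assumes artifact names look like page-<timestamp>.yml and
// screenshot-<timestamp>.png, and that the same timestamps are recorded
// in exploration/<slug>.md.
import { readdirSync, readFileSync } from "node:fs";
import { join } from "node:path";

function artifactsForScenario(slug: string): string[] {
  const report = readFileSync(
    join("docs/playwright-spec-testing/exploration", `${slug}.md`),
    "utf8",
  );
  // Collect every timestamp-like token mentioned in the exploration report.
  const timestamps = new Set(
    [...report.matchAll(/\b(\d{10,})\b/g)].map((m) => m[1]),
  );
  // Keep only artifacts whose filename timestamp the report mentions.
  return readdirSync(".playwright-cli").filter((name) => {
    const m = name.match(/^(?:page|screenshot)-(\d+)\.(?:yml|png)$/);
    return m !== null && timestamps.has(m[1]);
  });
}
```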

Phase 2: Write Application Overview

Write a prose paragraph describing:

  • The feature under test
  • The user journey (entry → action → outcome)
  • Key pages and flows observed during exploration

Source this entirely from the routing sections of project-context.md and the URLs/page titles observed in the exploration reports. No inference; include only what was observed.

Phase 3: Assign test file paths (interactive)

For each scenario:

  1. Read ## Test File Conventions from project-context.md for existing test files.
  2. If no existing test files exist, auto-assign the suggested new path (option 4) without prompting and skip to step 6.
  3. Rank the top 3 existing files by relevance (a scoring sketch follows the rules below):
    • Same route prefix
    • Feature name overlap in the filename
    • Similar naming pattern
  4. Present options to the user:
Scenario: "[Scenario Name]"

Where should this test go?

  1. tests/path/to/most-relevant.spec.ts  ← [reason]
  2. tests/path/to/second.spec.ts         ← [reason]
  3. tests/path/to/third.spec.ts          ← [reason]
  4. Create new file: tests/path/to/suggested-name.spec.ts  (suggested)

Enter 1–4 (or type a custom path):
  5. Wait for the user's response.
  6. Record the chosen path for use in Phase 4.

Rules:

  • The suggested new filename (option 4) follows the naming pattern from project-context.md (lowercase, hyphens, .spec.ts).
  • File assignment always requires human input — even when run-testing-session is in checkpoint_mode: auto.
  • If no existing test files exist, auto-assign option 4 and note it in the output report.
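
The relevance ranking in step 3 can be a simple additive score over the three signals listed there. A minimal sketch; the Scenario shape and the weights are illustrative assumptions, not part of the skill:

```ts
// Sketch only: score candidate spec files against a scenario using the
// three ranking signals from step 3.
interface Scenario {
  slug: string;        // e.g. "checkout-applies-coupon" (hypothetical)
  routePrefix: string; // e.g. "/checkout", taken from the exploration report
}

function rankCandidates(scenario: Scenario, specFiles: string[]): string[] {
  const score = (file: string): number => {
    let s = 0;
    // Same route prefix appears in the path.
    if (file.includes(scenario.routePrefix.replace(/^\//, ""))) s += 3;
    // Feature-name overlap: count slug words present in the filename.
    s += scenario.slug.split("-").filter((w) => file.includes(w)).length;
    // Similar naming pattern: lowercase, hyphens, .spec.ts.
    if (/^[a-z0-9-]+\.spec\.ts$/.test(file.split("/").pop() ?? "")) s += 1;
    return s;
  };
  return [...specFiles].sort((a, b) => score(b) - score(a)).slice(0, 3);
}
```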

Phase 4: Write test-plan.md

Save to docs/playwright-spec-testing/test-plan.md.

Format:

# <Feature> Test Plan

## Application Overview
<Paragraph describing the feature, user journey, key pages/flows — synthesized from project-context.md and observed navigation. Written prose, not bullet points.>

## Test Scenarios

### 1. <Scenario Group>

**Seed:** `<test data or empty string>`

#### 1.1. <Scenario Name>

**File:** `tests/<path>/<file>.spec.ts`

**Steps:**
  1. <Action description>
    - expect: <observable outcome verbatim from exploration report>
    - expect: <observable outcome>
  2. <Action description>
    - expect: ...

Rules when writing this file:

  • Every expect: line must be derived from something observed in the exploration report — no invented assertions
  • URLs, page titles, button labels, input placeholder text copied verbatim from exploration reports
  • Application Overview is written prose, not bullet points
  • File path is the one assigned interactively in Phase 3
  • Seed is empty string unless the scenario requires specific test data
  • Steps are numbered sequentially within each scenario
  • Every step from the exploration report must appear as a step in the plan
  • If a YAML snapshot contains a more precise selector than the exploration report (e.g., a data-testid the manual walk missed), use the snapshot's version and add a note: <!-- selector from .playwright-cli snapshot -->
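
To make the format above concrete, here is a minimal serialization sketch. The PlannedScenario and Step shapes are assumptions for illustration; the skill itself only specifies the markdown output:

```ts
// Sketch only: render one scenario into the test-plan.md layout above.
interface Step {
  action: string;
  expects: string[]; // copied verbatim from the exploration report
}
interface PlannedScenario {
  group: number;   // the "### 1." group number
  index: number;   // the "#### 1.1." scenario number within the group
  name: string;
  file: string;    // the path assigned interactively in Phase 3
  steps: Step[];
}

function renderScenario(s: PlannedScenario): string {
  const steps = s.steps.flatMap((step, i) => [
    `  ${i + 1}. ${step.action}`,
    ...step.expects.map((e) => `    - expect: ${e}`),
  ]);
  return [
    `#### ${s.group}.${s.index}. ${s.name}`,
    "",
    `**File:** \`${s.file}\``,
    "",
    "**Steps:**",
    ...steps,
  ].join("\n");
}
```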

Phase 5: Update parsed-spec.md

For each scenario, update the status:

### Status
- [x] Planned
- [x] Explored
- [x] Synthesized
- [ ] Generated
- [ ] Passing
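
A minimal sketch of the status update, assuming each scenario's Status block in parsed-spec.md uses exactly the checkbox lines shown above:

```ts
// Sketch only: flip the "Synthesized" checkbox for every scenario.
// Assumes the literal checkbox text shown above; scoping the change to
// individual scenarios would need a real markdown parse.
import { readFileSync, writeFileSync } from "node:fs";

const specPath = "docs/playwright-spec-testing/parsed-spec.md";
const text = readFileSync(specPath, "utf8");
writeFileSync(specPath, text.replaceAll("- [ ] Synthesized", "- [x] Synthesized"));
```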

Key Rules

  • NEVER invent an assertion — every expect: line must trace to an exploration report entry
  • NEVER write bullet-point Application Overview — it must be prose
  • NEVER leave a scenario without a File assignment
  • NEVER skip a step that appears in the exploration report
  • All file paths relative to project root

Output

  • docs/playwright-spec-testing/test-plan.md
  • Updated docs/playwright-spec-testing/parsed-spec.md

Report when done:

  • Status: DONE | DONE_WITH_CONCERNS | BLOCKED | NEEDS_CONTEXT
  • Number of scenarios synthesized
  • Total steps written
  • Total expect: lines written
  • File paths assigned (list each scenario → file)
  • Any exploration report that was missing or incomplete
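
The numeric fields of the report can be derived straight from the rendered plan. A minimal counting sketch, assuming the heading and step patterns from the Phase 4 format:

```ts
// Sketch only: compute the report counts from test-plan.md using the
// patterns defined in the Phase 4 format.
import { readFileSync } from "node:fs";

const plan = readFileSync("docs/playwright-spec-testing/test-plan.md", "utf8");
const scenarios = plan.match(/^#### \d+\.\d+\. /gm)?.length ?? 0;
const steps = plan.match(/^  \d+\. /gm)?.length ?? 0;
const expects = plan.match(/^ *- expect: /gm)?.length ?? 0;
console.log({ scenarios, steps, expects });
```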