# Plan Tests
## When to use
- After `explore-app` has produced exploration reports for all scenarios in `parsed-spec.md`
- When running as part of the `run-testing-session` pipeline (Stage 3)
- When re-synthesizing the plan after a UI change or new exploration
## Inputs
- `docs/playwright-spec-testing/project-context.md` — existing infrastructure, conventions, routing
- `docs/playwright-spec-testing/exploration/<slug>.md` — one file per scenario (all of them)
- `docs/playwright-spec-testing/parsed-spec.md` — scenario list and slugs
- `.playwright-cli/` (optional) — YAML snapshots and screenshots for selector cross-checking
## What it does
Synthesize a complete, human-reviewable test plan grounded in observed DOM data and existing project architecture. Every assertion must be traceable to the exploration report. No invented selectors or URLs.
### Phase 1: Read all inputs
- Read `project-context.md` — note `testDir`, tech stack, existing specs, routing structure
- Read `parsed-spec.md` — extract scenario names and slugs
- For each scenario, read `exploration/<slug>.md`
- Scan `.playwright-cli/` for artifacts matching each scenario (if present):
  - Match YAML snapshots and screenshots to scenarios by comparing filenames/timestamps against scenario slugs and the timestamps recorded in each `exploration/<slug>.md` (see the sketch after this list).
  - For each scenario with matching artifacts, build an artifact context block:

    ```
    ## Artifact context: <scenario-slug>
    - Snapshot: .playwright-cli/page-<timestamp>.yml → selectors found: [list role/label/testid selectors]
    - Screenshot: .playwright-cli/screenshot-<timestamp>.png
    ```

  - Append this block to that scenario's exploration report context before synthesis begins.
  - If `.playwright-cli/` is absent or empty, skip this step entirely — proceed as before.
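As a rough illustration of the artifact-matching step above, here is a minimal TypeScript sketch. It assumes timestamped filenames like `page-<timestamp>.yml` / `screenshot-<timestamp>.png` and ISO-style timestamps recorded in each exploration report; the regex and the `matchArtifacts` helper are illustrative only, not part of this skill.

```ts
import { readdirSync, readFileSync } from "node:fs";
import path from "node:path";

interface ArtifactMatch {
  slug: string;
  snapshots: string[];
  screenshots: string[];
}

// Pair .playwright-cli artifacts with scenario slugs by timestamp or slug substring.
function matchArtifacts(cliDir: string, explorationDir: string, slugs: string[]): ArtifactMatch[] {
  const files = readdirSync(cliDir);
  return slugs.map((slug) => {
    // Timestamps mentioned in the exploration report, e.g. "2024-05-01T10-32-07" (assumed format).
    const report = readFileSync(path.join(explorationDir, `${slug}.md`), "utf8");
    const stamps = report.match(/\d{4}-\d{2}-\d{2}T[\d-]+/g) ?? [];
    const belongs = (f: string) => stamps.some((s) => f.includes(s)) || f.includes(slug);
    return {
      slug,
      snapshots: files.filter((f) => f.endsWith(".yml") && belongs(f)),
      screenshots: files.filter((f) => f.endsWith(".png") && belongs(f)),
    };
  });
}
```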
### Phase 2: Write Application Overview
Write a prose paragraph describing:
- The feature under test
- The user journey (entry → action → outcome)
- Key pages and flows observed during exploration
Source this entirely from the routing sections of `project-context.md` and the observed URLs/page titles in the exploration reports. No inference — only what was observed.
### Phase 3: Assign test file paths (interactive)
For each scenario:
- Read `## Test File Conventions` from `project-context.md` for existing test files.
- If no existing test files exist, skip the ranking and prompt below and auto-assign the suggested new path without prompting.
- Rank the top 3 existing files by relevance (a scoring sketch follows the Rules below):
  - Same route prefix
  - Feature name overlap in the filename
  - Similar naming pattern
- Present options to the user:

  ```
  Scenario: "[Scenario Name]"
  Where should this test go?
    1. tests/path/to/most-relevant.spec.ts ← [reason]
    2. tests/path/to/second.spec.ts ← [reason]
    3. tests/path/to/third.spec.ts ← [reason]
    4. Create new file: tests/path/to/suggested-name.spec.ts (suggested)
  Enter 1–4 (or type a custom path):
  ```

- Wait for the user's response.
- Record the chosen path for use in Phase 4.
Rules:
- The suggested new filename (option 4) follows the naming pattern from `project-context.md` (lowercase, hyphens, `.spec.ts`).
- File assignment always requires human input — even when `run-testing-session` is in `checkpoint_mode: auto`.
- If no existing test files exist, auto-assign option 4 and note it in the output report.
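The relevance ranking above could be approximated with a heuristic like the following sketch; the scoring weights and the `rankSpecFiles` helper are assumptions for illustration, not part of the skill's contract.

```ts
// Illustrative relevance scoring for existing spec files (weights are arbitrary).
interface Candidate { file: string; score: number; reason: string }

function rankSpecFiles(existing: string[], routePrefix: string, feature: string): Candidate[] {
  const featureWords = feature.toLowerCase().split(/\W+/).filter(Boolean);
  return existing
    .map((file) => {
      const name = file.toLowerCase();
      let score = 0;
      const reasons: string[] = [];
      if (routePrefix && name.includes(routePrefix.toLowerCase())) {
        score += 3;
        reasons.push("same route prefix");
      }
      if (featureWords.some((w) => name.includes(w))) {
        score += 2;
        reasons.push("feature name overlap");
      }
      if (/^[a-z0-9-]+\.spec\.ts$/.test(file.split("/").pop() ?? "")) {
        score += 1;
        reasons.push("matches naming pattern");
      }
      return { file, score, reason: reasons.join(", ") || "no overlap" };
    })
    .sort((a, b) => b.score - a.score)
    .slice(0, 3);
}
```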
### Phase 4: Write `test-plan.md`
Save to `docs/playwright-spec-testing/test-plan.md`.
Format:
```markdown
# <Feature> Test Plan

## Application Overview
<Paragraph describing the feature, user journey, key pages/flows — synthesized from project-context.md and observed navigation. Written prose, not bullet points.>

## Test Scenarios

### 1. <Scenario Group>
**Seed:** `<test data or empty string>`

#### 1.1. <Scenario Name>
**File:** `tests/<path>/<file>.spec.ts`
**Steps:**
1. <Action description>
   - expect: <observable outcome verbatim from exploration report>
   - expect: <observable outcome>
2. <Action description>
   - expect: ...
```
Rules when writing this file:
- Every `expect:` line must be derived from something observed in the exploration report — no invented assertions
- URLs, page titles, button labels, and input placeholder text are copied verbatim from exploration reports
- Application Overview is written prose, not bullet points
- File path is the one assigned interactively in Phase 3
- Seed is an empty string unless the scenario requires specific test data
- Steps are numbered sequentially within each scenario
- Every step from the exploration report must appear as a step in the plan
- If a YAML snapshot contains a more precise selector than the exploration report (e.g., a `data-testid` the manual walk missed), use the snapshot's version and add a note: `<!-- selector from .playwright-cli snapshot -->`
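For context on why verbatim observables matter, here is a hedged sketch of how a downstream `generate-tests` pass might mechanically translate one plan step into Playwright calls. The `/login` URL, the "Sign in" title, and the "Continue" button are hypothetical examples, not values from any real exploration report.

```ts
import { test, expect } from "@playwright/test";

// Hypothetical plan step:
//   1. Navigate to the login page
//      - expect: page title is "Sign in"
//      - expect: button labeled "Continue" is visible
test("login page renders", async ({ page }) => {
  await page.goto("/login"); // URL copied verbatim from the plan
  await expect(page).toHaveTitle("Sign in"); // expect: page title is "Sign in"
  await expect(page.getByRole("button", { name: "Continue" })).toBeVisible(); // expect: button labeled "Continue" is visible
});
```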
### Phase 5: Update `parsed-spec.md`
For each scenario, update the status:
```markdown
### Status
- [x] Planned
- [x] Explored
- [x] Synthesized
- [ ] Generated
- [ ] Passing
```
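A minimal sketch of that status update, assuming every scenario's `### Status` block in `parsed-spec.md` uses the exact checkbox labels shown above; the helper name is illustrative only.

```ts
import { readFileSync, writeFileSync } from "node:fs";

// Flip "- [ ] Synthesized" to "- [x] Synthesized" for every scenario in parsed-spec.md.
function markSynthesized(specPath: string): void {
  const original = readFileSync(specPath, "utf8");
  const updated = original.replace(/- \[ \] Synthesized/g, "- [x] Synthesized");
  writeFileSync(specPath, updated);
}

markSynthesized("docs/playwright-spec-testing/parsed-spec.md");
```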
## Key Rules
- NEVER invent an assertion — every `expect:` line must trace to an exploration report entry
- NEVER write a bullet-point Application Overview — it must be prose
- NEVER leave a scenario without a File assignment
- NEVER skip a step that appears in the exploration report
- All file paths are relative to the project root
## Output
- `docs/playwright-spec-testing/test-plan.md`
- Updated `docs/playwright-spec-testing/parsed-spec.md`
Report when done:
- Status: DONE | DONE_WITH_CONCERNS | BLOCKED | NEEDS_CONTEXT
- Number of scenarios synthesized
- Total steps written
- Total expect: lines written
- File paths assigned (list each scenario → file)
- Any exploration report that was missing or incomplete