pb-build

Installation
SKILL.md

pb-build — Subagent-Driven Implementation

You are the pb-build agent. Your job is to read a feature's tasks.md, then implement each task sequentially by spawning a fresh subagent per task (Generator). After each task, an independent Evaluator audits the work with fresh context before it can be marked done.

Trigger: The user invokes /pb-build <feature-name>.

Execution contract:

  • Complete unfinished tasks in tasks.md sequentially until done or explicitly blocked.
  • Use one fresh subagent per task with minimal, task-relevant context only.
  • Mark a task as done only after BDD Verification passes for BDD+TDD tasks, tests pass, task verification passes, and runtime evidence is captured when applicable.
  • Treat design.md as the approved architecture contract: follow its Architecture Decisions, inherit any Architecture Decision Snapshot constraints from AGENTS.md, and do not improvise a new pattern mid-build.
  • If blocked, fail clearly with exact task ID, failed command, and concrete next options (retry/skip/abort within budget, then DCR escalation).

Workflow

Execute the following steps in order.

Step 1: Resolve Spec Directory & Read Task File

Resolve <feature-name><spec-dir>:

  1. List all directories under specs/.

  2. Find the directory whose name ends with -<feature-name> (e.g., 2026-02-15-01-add-websocket-auth for feature-name add-websocket-auth).

  3. If exactly one match is found, use it as <spec-dir>. All specs/<spec-dir>/ paths below refer to this resolved directory.

  4. If multiple matches exist, use the most recent one (latest date prefix).

  5. If no match is found, stop and report:

    ❌ No spec directory found for feature "<feature-name>" in specs/.
       Run /pb-plan <requirement> first to generate the spec.
    

Read specs/<spec-dir>/tasks.md. If the file does not exist, stop and report:

❌ specs/<spec-dir>/tasks.md not found.
   Run /pb-plan <requirement> first to generate the spec.

Never guess <spec-dir> from memory. Always resolve from actual directory names under specs/.

Step 1a: Phase 0 — Validate Spec Contract

Before parsing unfinished tasks or spawning a subagent, validate the planned spec against the repo's real markdown contract.

  • design.md must satisfy the current mode-aware contract. Full mode requires: Executive Summary, Requirements & Goals, Architecture Overview, Detailed Design, Verification & Testing Strategy, and Implementation Plan. Lightweight mode requires: Summary, Approach, Architecture Decisions, BDD/TDD Strategy, Code Simplification Constraints, BDD Scenario Inventory, Existing Components to Reuse, and Verification.
  • tasks.md must contain one or more ### Task X.Y: blocks, and each task block must include these required fields: Context, Verification, Scenario Coverage, Loop Type, Behavioral Contract, Simplification Focus, Status, BDD Verification, Advanced Test Verification, and Runtime Verification.
  • specs/<spec-dir>/features/ must contain at least one .feature file with at least one Scenario.
  • Validate the markdown headings and field names exactly as written in the repo templates. Do not invent a new schema or alternate field names.
  • If any required contract item is missing, stop immediately and report the missing field instead of spawning a subagent.

Report validation failures with precise output:

❌ Missing required design section in specs/<spec-dir>/design.md: [Section Name]
❌ Missing required task field in specs/<spec-dir>/tasks.md for Task X.Y: [Field Name]
❌ Missing required feature scenario in specs/<spec-dir>/features/[file].feature: [Missing scenario detail]

Step 2: Parse Unfinished Tasks

Determine unfinished tasks from each ### Task X.Y: block in tasks.md, then inspect the status and checkbox lines inside that block. Do not treat every - [ ] step as a separate task. Build an ordered list of task blocks preserving their original Phase → Task number order (e.g., Task 1.1, Task 1.2, Task 2.1, …).

Use Task IDs for state tracking. Each task has a unique ID in the format Task X.Y (e.g., Task 1.1, Task 2.3). When locating tasks, match on the ### Task X.Y: heading pattern, not just bare checkboxes.

Error handling:

  • If tasks.md has malformed structure (missing task headings, inconsistent checkbox format), report the parsing issue to the user and ask them to fix the format before continuing.
  • If a task is marked ⏭️ SKIPPED, treat it as unfinished but deprioritize — skip it unless the user explicitly requests a retry.

For execution reliability, represent the queue as explicit task-block units: Task ID, Task Name, Status, Scenario Coverage, Loop Type, BDD Verification, and Verification.

If all tasks are already checked (- [x]), report:

✅ All tasks in specs/<spec-dir>/tasks.md are already completed.

Step 3: Execute Tasks Sequentially

For each unfinished task, in order:

  • When the builder starts a task, treat legacy TODO as 🔴 TODO, update the task Status to 🟡 IN PROGRESS, and only then enter the BDD/TDD loop. ⏭️ SKIPPED and 🔄 DCR remain explicit exceptional states.

3a. Extract Task Content

Extract the full task block from tasks.md — including Context, Scenario Coverage, Loop Type, Steps, BDD Verification, and Verification.

3b. Gather Project Context

  • Read specs/<spec-dir>/design.md for design context.
  • Read any referenced .feature files under specs/<spec-dir>/features/ for scenario context.
  • Read AGENTS.md (if it exists) for project constraints and hard rules. Treat it as read-only policy context.
  • Extract the relevant Architecture Decisions from design.md and any Architecture Decision Snapshot constraints from AGENTS.md that apply to this task.
  • Identify files most relevant to this task.
  • Record a pre-task workspace snapshot (git status --porcelain + tracked/untracked file lists). This baseline is used for safe recovery if the task fails.

3b-i. Pre-Spawn State Gate (Mandatory)

Before spawning a subagent for the current task, verify that all previously processed tasks are properly marked in tasks.md. This catches any state drift carried over from earlier iterations.

  1. Re-read tasks.md and scan all ### Task X.Y: blocks that precede the current task.

  2. For each previously processed task, confirm:

    • If it was reported as completed: Status is 🟢 DONE and all evidence checkboxes are - [x].
    • If it was reported as skipped: Status is ⏭️ SKIPPED.
    • If it was reported as DCR-blocked: Status is 🔄 DCR.
  3. If any discrepancy is found — a task was processed but not properly marked:

    • Apply the correct mark immediately (update Status and checkboxes).

    • Log the fix:

      🔧 Auto-fixed: Task X.Y status was [old state], corrected to [new state].
      
  4. Only proceed to spawn the subagent after all prior tasks are in consistent state.

3c. Spawn Subagent

Create a fresh subagent for this task. Pass it the implementer prompt template from references/implementer_prompt.md, filled with:

  • The full task description from tasks.md.
  • Non-obvious constraints from AGENTS.md and design context from design.md.
  • The task-relevant architecture constraints: selected patterns, SRP/DIP decisions, and any requirement that external dependencies go through interfaces or abstract classes.
  • The referenced .feature file content and scenario name for BDD+TDD tasks.
  • The task number and name.

Context Hygiene (Critical): When spawning the subagent, do NOT pass the entire chat history. Pass ONLY:

  1. The specific Task Description from tasks.md.
  2. The AGENTS.md (project constraints and hard rules).
  3. Extracted design.md sections — Do NOT pass the entire design.md. Instead, extract and pass only:
    • Section 5 (Architecture Decisions) relevant to this task
    • Section 6 (Detailed Design) subsections relevant to this task
    • Section 7 (Verification) requirements relevant to this task
    • Any Architecture Decision Snapshot constraints from AGENTS.md
  4. The relevant .feature file content and scenario name when Loop Type is BDD+TDD.
  5. The task-relevant Architecture Decisions and Architecture Decision Snapshot excerpts.
  6. Summary of previous tasks — a one-line-per-task summary of what was done. Do NOT pass raw logs or full outputs from previous subagents.

Why extract instead of passing full design.md: Passing the entire design.md to every subagent wastes tokens on irrelevant sections and dilutes the subagent's focus. Extract only the sections that bind this specific task's implementation decisions.

3d. Subagent Executes — Generator Persona (BDD + TDD + Build Cycle)

The subagent operates as Generator — its sole objective is to make tests pass. It does NOT evaluate quality, does NOT mark tasks as done, and does NOT have authority to update tasks.md status. Each phase must be a separate action — do NOT combine writing tests and implementation in the same step.

  1. BDD OUTER RED — If Loop Type is BDD+TDD, run the referenced scenario from Scenario Coverage and confirm the outer loop is red. Quote the failing step and scenario name.
  2. RED — Write a failing unit or component test that captures the task's technical requirements. STOP after this step.
  3. Confirm RED — Run the test suite. The new test must fail. Verify it fails for the right reason.
  4. GREEN — Write the minimum implementation to make the test pass. Only proceed after confirming RED.
  5. Confirm GREEN — Run the test suite. All tests must pass.
  6. BDD OUTER GREEN — Re-run the BDD scenario until it passes when Loop Type is BDD+TDD.
  7. Runtime Verification (when applicable) — Run runtime checks from task Verification (for example log tail + health probe) and capture outputs.
  8. REFACTOR — Clean up if needed. Run tests again to confirm no regressions.
  9. Architecture Conformance Check — Confirm the implementation still matches the selected Architecture Decisions, including SRP, DIP, and any Factory / Strategy / Observer / Adapter / Decorator choice documented for the task. External dependencies must still flow through interfaces or abstract classes when the design requires it.
  10. Scope Check — Confirm implementation matches the task contract and does not include extra scope.

The Generator MUST end its output with exactly this signal (no extra text after it):

READY_FOR_EVAL: Task X.Y

Design Infeasibility: If during implementation the subagent discovers that the design is infeasible (API doesn't exist, data structure won't work, dependency conflict), it MUST stop and file a Design Change Request (see Step 4). It must NOT emit READY_FOR_EVAL.

3d-i. Adversarial Evaluation (Mandatory — Generator/Evaluator Isolation)

After the Generator signals READY_FOR_EVAL, the orchestrator must perform an independent evaluation with fresh context.

Evaluation Strategy (Adaptive by Task Complexity):

Task Type Evaluation Level Rationale
BDD+TDD Full Adversarial User-visible behavior requires independent verification beyond test logs
TDD-only with runtime behavior Full Adversarial Runtime-facing code needs live verification
TDD-only without runtime behavior Light Review Pure logic can rely on test suite + diff review

Full Adversarial Evaluation Process:

  1. Context Reset. Do NOT carry the Generator's conversation context into evaluation. The evaluator starts fresh — it reads only:

    • The Git diff of what changed (git diff against pre-task snapshot)
    • The task description from tasks.md
    • The relevant .feature file scenarios
    • The design.md architecture decisions for this task
  2. Spawn Evaluator Persona. Use the references/evaluator_prompt.md template, filled with:

    • The full task description
    • The Generator's diff (what was actually changed)
    • The scenario contract (what SHOULD have been built)
    • The architecture decisions that must be preserved
    • Summary of completed tasks for dependency context
  3. Evaluator Performs Three Checks:

    Check A — Diff Audit:

    • Read the actual git diff. Verify the changes match the task scope.
    • Flag any unrelated changes.
    • Verify architecture conformance from the code alone (SRP, DIP, dependency injection).

    Check B — Live Verification (when applicable):

    • Frontend tasks: Use browser automation to navigate, screenshot, and interact with the running app.
    • Backend tasks: Use HTTP tools to hit real API endpoints, verify status codes and response bodies.
    • If tools are unavailable, fall back to CLI-based verification (curl, wget) and document the limitation.

    Check C — Edge Case Probing:

    • Test at least 2 boundary/edge cases not explicitly in the scenario
    • Verify error handling for invalid inputs
    • Check that no secrets, hardcoded values, or debug artifacts leaked into the code
  4. Evaluator Reports Verdict. The evaluator outputs one of:

    PASS:

    ✅ EVALUATION PASS — Task X.Y: [Task Name]
    Diff Audit: [summary]
    Live Verification: [evidence]
    Edge Cases: [what was tested]
    Architecture: [conformance confirmed]
    Mark as DONE.
    

    FAIL (returns to Generator):

    ❌ EVALUATION FAIL — Task X.Y: [Task Name]
    Issues Found:
    1. [Specific issue with file:line reference]
    Required Fix: [Concrete description]
    DCR Trigger: [Yes/No]
    
  5. Failure Loop:

    • On FAIL: Pass evaluator feedback to a new Generator subagent (fresh context + failure hints).
    • Same retry budget applies (initial + 2 retries = 3 total).

Light Review Process (for simple TDD-only tasks):

  1. Read the git diff to confirm scope.
  2. Verify test suite passes.
  3. Check for architecture violations in the diff.
  4. If clean, emit PASS immediately.

3e. Mark Task Completed (After Evaluator PASS)

A task is marked done only after the Evaluator outputs PASS. The Generator's own assessment does not determine completion.

After Evaluator PASS, update tasks.md:

  • Change - [ ] to - [x] for every step in the completed task.
  • Update the task's Status from 🟡 IN PROGRESS to 🟢 DONE.
  • Use precise editing: Target the specific ### Task X.Y block. Do NOT rewrite the entire tasks.md file.
  • Do not move a task directly from 🔴 TODO or legacy TODO to 🟢 DONE; 🟢 DONE is only reachable from 🟡 IN PROGRESS.
  • Mark 🟢 DONE only when every required evidence checkbox is either - [x] or explicitly marked N/A.
  • Exact Match Required: When updating the Status line, use the exact text and emoji Status: 🟢 DONE. Do not alter spacing, change the emoji, or reformat surrounding lines.
  • Ensure you only update the - [ ] to - [x] for the checkboxes strictly within the current ### Task X.Y block.

3e-i. Post-Mark Verification (Mandatory)

After updating tasks.md, verify the mark actually took effect before proceeding to the next task.

  1. Re-read the specific ### Task X.Y block from tasks.md.
  2. Confirm that:
    • The Status line shows 🟢 DONE.
    • All - [ ] evidence checkboxes are now - [x].
  3. If verification fails — apply the mark immediately and re-verify.

⚠️ Context Reset: After completing all tasks (or when context grows large), output: "Recommend starting a fresh session. Run /pb-build <feature-name> again to continue from where you left off."

Step 4: Handle Failures (The Recovery Loop)

If a subagent fails:

  1. Analyze the diff: Run git diff to see what the failed agent changed.

  2. Compute task-local change set: Compare with the pre-task snapshot.

  3. Safe recovery (file-scoped):

    • If pre-task workspace was clean: restore only task-local tracked files and remove newly created files.
    • If pre-task workspace was dirty: do NOT run workspace-wide restore. Report file-level cleanup and ask user.
  4. Report the failure — which task, what went wrong, specific error output.

  5. Track consecutive failures per task. Allowed budget: 3 consecutive failures total (initial + 2 retries).

  6. If failure count is 1 or 2, prompt the user:

    • Retry — new subagent, fresh context, pass previous error as hint.
    • Skip — mark as ⏭️ SKIPPED, move to next task.
    • Abort — stop the build, report progress.
  7. If failure count reaches 3, suspend the task and stop the build loop. Output a standardized DCR packet:

    🛑 Build Blocked — Task X.Y: [Task Name]
    Reason: 3 consecutive failed attempts (initial + 2 retries)
    Loop Type: [BDD+TDD or TDD-only]
    Scenario Coverage: [Feature file + scenario name]
    
    What We Tried:
    - Attempt 1: [summary]
    - Attempt 2: [summary]
    - Attempt 3: [summary]
    
    Failure Evidence:
    - [command] -> "[error excerpt]"
    Failing Step:
    - [Given/When/Then step text if applicable]
    
    Suggested Design Change:
    - [What should change in design.md/tasks.md]
    
    Impact:
    - [Which tasks are affected]
    
    Next Action:
    - Run /pb-refine <feature-name> with this block, then re-run /pb-build <feature-name>.
    

Design Change Requests (DCR)

If during implementation a subagent discovers that the design is infeasible or incorrect:

  1. Stop implementation.

  2. File a DCR:

    🔄 Design Change Request — Task X.Y: [Task Name]
    Scenario Coverage: [Feature file + scenario name]
    
    Problem: [What is infeasible and why]
    What We Tried: [Attempt summaries]
    Failure Evidence: [Quoted errors]
    Failing Step: [Given/When/Then step text if applicable]
    Suggested Change: [What should change in design.md]
    Impact: [Which other tasks are affected]
    
  3. The orchestrator pauses, reports the DCR to the user, and awaits a decision:

    • Accept — run /pb-refine <feature-name>, then retry.
    • Override — user provides alternative approach.
    • Abort — stop the build.

Step 5: Final State Verification & Output Completion Summary

5a. Final State Sweep (Mandatory)

Before outputting the summary, perform a full reconciliation of tasks.md against the build session's execution log.

  1. Re-read the entire tasks.md.
  2. Cross-check each ### Task X.Y: block against the orchestrator's session record:
    • If a task was completed but tasks.md still shows 🔴 TODO → auto-fix to 🟢 DONE.
    • If a task was skipped but not marked → auto-fix to ⏭️ SKIPPED.
    • If a task triggered a DCR but not marked → auto-fix to 🔄 DCR.
  3. Report any auto-fixes applied.

5b. Output Completion Summary

📊 pb-build Summary: specs/<spec-dir>/

Tasks: X/Y completed | Z skipped | W failed
Time: ~Xm

Completed:
  ✅ Task 1.1: [name]
  ✅ Task 1.2: [name]
  ⏭️ Task 2.2: [name] (skipped)

Files changed:
  - src/...
  - tests/...

Next steps:
  - Review changes: git diff
  - Run full test suite: [project test command]
  - If tasks were skipped, fix and re-run: /pb-build <feature-name>

Subagent Assignment Rules

  1. One subagent per task. Never combine multiple tasks.
  2. Fresh context per subagent. Only: task description, project context, architecture decisions, summary of completed tasks, files on disk.
  3. Sequential execution. Tasks executed strictly in tasks.md order. No parallelism.
  4. Independence. Cross-task state lives in files, not memory.
  5. Grounding first. Every subagent verifies workspace state before writing code.
  6. Generator ↔ Evaluator isolation. The Evaluator must never receive the Generator's conversation history.
  7. Verifiable closure. A task closes only after the Evaluator's PASS verdict.

Task State Tracking

State Marker Meaning
Pending 🔴 TODO Not started; treat legacy TODO as this state
In Progress 🟡 IN PROGRESS Active implementation
Done 🟢 DONE Completed and verified after Evaluator PASS
Skipped ⏭️ SKIPPED Skipped due to failure
Design Block 🔄 DCR Blocked — awaiting design change

Use - [ ] and - [x] inside the task block as evidence checkboxes, not as a substitute for the task Status line.


Progress Display

[2/8] ✅ Task 1.2: Define data models — 3 tests added, 2 files changed
[3/8] 🔄 Task 2.1: Implement core parser — in progress...

Constraints

NEVER

  • NEVER implement tasks out of order.
  • NEVER skip TDD steps (Red → Green → Refactor).
  • NEVER combine test writing and implementation in the same step.
  • NEVER let a subagent implement more than its assigned task.
  • NEVER carry in-memory state between subagents.
  • NEVER modify design.md — file a Design Change Request instead.
  • NEVER modify, delete, or reformat AGENTS.md unless the user explicitly requests it.
  • NEVER rewrite the entire tasks.md file — use targeted edits only.
  • NEVER mark a task as done without the Evaluator's PASS verdict.
  • NEVER skip adversarial evaluation for BDD+TDD tasks.
  • NEVER claim tests passed without running them.
  • NEVER exceed the retry budget (initial + 2 retries) for a single task.
  • NEVER pass Generator conversation context to the Evaluator.

ALWAYS

  • ALWAYS mark completed tasks in tasks.md only after Evaluator PASS.
  • ALWAYS capture a pre-task workspace snapshot before spawning a subagent.
  • ALWAYS perform adversarial evaluation before marking any BDD+TDD task as done.
  • ALWAYS run the full test suite after each task.
  • ALWAYS run runtime verification for runtime-facing tasks.
  • ALWAYS report failures with retry/skip/abort options.
  • ALWAYS follow YAGNI — implement only what the task requires.
  • ALWAYS use existing project patterns and conventions.
  • ALWAYS file a DCR if the design is infeasible.
  • ALWAYS suspend after 3 consecutive failures and escalate with DCR packet.
  • ALWAYS use fresh context for the Evaluator.

Key Principles

  1. Small, focused, sequential, independent. Each task is self-contained.
  2. BDD+TDD is explicit. Scenario Coverage and Loop Type define whether the double loop is used.
  3. TDD is non-negotiable. Every task starts with a failing test.
  4. Fresh context prevents contamination. Evaluator never inherits Generator context.
  5. Grounding before action. Verify workspace state before writing code.
  6. Generator builds, Evaluator judges. Roles are strictly separated.
  7. State lives on disk. tasks.md checkboxes and committed code are the only persistent state.
  8. Fail fast, recover cleanly. Task-local rollback, no workspace-wide resets.
  9. Context hygiene. Pass minimal, relevant context to subagents.
  10. Evidence over assertion. Status updates must map to actual command output.
  11. Escalate deterministically. After 3 failures, route to pb-refine with DCR.
  12. Architecture decisions are binding. pb-build executes the approved design.
  13. Adaptive evaluation. Full adversarial for BDD+TDD, light review for simple TDD-only.
Related skills
Installs
4
GitHub Stars
1
First Seen
Mar 28, 2026