<tool_restrictions>
MANDATORY Tool Restrictions
BANNED TOOLS — calling these is a skill violation:
- EnterPlanMode — BANNED. Do NOT call this tool. This skill IS the planning process. The steps below replace Claude's built-in planning entirely. You are NOT doing a task that needs plan mode — you ARE already executing a structured plan-creation process. Calling EnterPlanMode would bypass the skill and waste the user's time.
- ExitPlanMode — BANNED. You are never in plan mode. There is nothing to exit.
If you feel the urge to "plan before acting" — that urge is satisfied by following the <process> steps below. They ARE the plan. Execute them directly.
</tool_restrictions>
<required_reading> Read these reference files NOW:
- references/testing-patterns.md
- references/task-granularity.md
- references/arc-paths.md
Load these only if relevant:
- references/model-strategy.md — if dispatching build agents
- references/frontend-design.md — if UI work involved
- references/ai-sdk.md — if `ai` in package.json
For UI work, also load interface rules:
- rules/interface/design.md — Visual principles
- rules/interface/colors.md — Color methodology
- rules/interface/spacing.md — Spacing system
- rules/interface/layout.md — Layout patterns
- rules/interface/animation.md — Motion rules
- rules/interface/forms.md — If forms involved
- rules/interface/interactions.md — Interaction patterns
- references/component-design.md — React component patterns </required_reading>
Step 1: Detect Project Stack
Use Glob tool to detect in parallel:
| Check | Glob Pattern |
|---|---|
| Test frameworks | vitest.config.*, playwright.config.*, jest.config.*, cypress.config.* |
| Package manager | pnpm-lock.yaml, yarn.lock, package-lock.json |
| Python project | requirements.txt, pyproject.toml |
Use Grep tool on package.json:
- Pattern `"next"` → Next.js
- Pattern `"react"` → React
Record detected stack:
- Test runner: [vitest/jest/playwright/cypress/pytest]
- Package manager: [pnpm/yarn/npm/pip/uv]
- Framework: [next/react/fastapi/etc]
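The Glob/Grep tools are the canonical path for this step; as a rough illustration, a shell equivalent of the same detection logic might look like the sketch below (file names match the table above; the defaults are assumptions):

```shell
# Detect test runner, package manager, and framework from the repo root.
runner=""
for f in vitest.config.* playwright.config.* jest.config.* cypress.config.*; do
  # An unmatched glob stays literal, so the -e check filters it out.
  [ -e "$f" ] && runner="${f%%.config.*}" && break
done

pm="npm"  # assume npm unless a pnpm/yarn lockfile says otherwise
[ -f pnpm-lock.yaml ] && pm="pnpm"
[ -f yarn.lock ] && pm="yarn"

framework=""
grep -q '"next"' package.json 2>/dev/null && framework="next"
[ -z "$framework" ] && grep -q '"react"' package.json 2>/dev/null && framework="react"

echo "runner=$runner pm=$pm framework=$framework"
```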
Step 2: Load Design Document
Find the design doc:
Glob: docs/arc/specs/*-design.md
Fallback: docs/plans/*-design.md
Pick the most recent one (highest date prefix). Read it. This is the source of truth for what to build.
Derive implementation plan filename: Replace -design.md with -implementation.md.
- Design: `docs/arc/specs/2025-06-15-user-dashboard-design.md`
- Implementation: `docs/arc/plans/2025-06-15-user-dashboard-implementation.md`
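The rename can be sketched in shell (paths follow the example above):

```shell
design="docs/arc/specs/2025-06-15-user-dashboard-design.md"
# Strip the directory and the -design.md suffix, then rebuild the plan path.
base=$(basename "$design" -design.md)
plan="docs/arc/plans/${base}-implementation.md"
echo "$plan"  # docs/arc/plans/2025-06-15-user-dashboard-implementation.md
```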
Step 2.2: Lock File Structure Before Tasks
Before defining tasks, write a short file map:
- Which files will be created or modified
- What responsibility each file owns
- Where boundaries or interfaces matter
- Whether any file is already too large or too tangled for a clean change
If the design implies multiple independent subsystems, stop and split the work into separate plans instead of forcing everything into one implementation plan.
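As a sketch, a file map for a hypothetical settings-page feature might look like the following (all paths and names illustrative, not from the design doc):

```
src/lib/settings-schema.ts       — validation schema + types; owns all validation rules
src/app/settings/page.tsx        — route entry; composes the form, no business logic
src/components/settings-form.tsx — form UI; depends only on the schema module
src/lib/settings-schema.test.ts  — unit tests for validation edge cases
```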
Extract from the design doc:
- User stories / acceptance criteria
- ASCII UI wireframes
- Data model
- Component structure
- API surface
Step 2.5: Find Reusable Patterns (Parallel Agents)
Spawn agents to find existing code to leverage:
Task Explore model: haiku: "Find existing patterns in this codebase that we can
reuse for: [list components/features from design].
Look for: similar components, utility functions, hooks, types, test patterns.
Structure your findings as:
## Reusable Code
- `file:line` — what it provides and how to use it
## Similar Implementations
- Feature and entry point file:line
## Essential Files for This Feature
List 5-10 files most critical to understand before implementing:
- `file.ts` — why it matters
"
Task Explore model: haiku: "Analyze coding conventions in this project. What naming patterns,
file organization, and architectural patterns should new code follow?"
If using unfamiliar libraries/APIs:
Task general-purpose model: haiku: "Gather documentation and best practices for
[library name] focusing on [specific feature needed]."
When agents complete:
- List reusable code (with file paths)
- Note conventions to follow
- Share Essential Files list — these should be read before implementation
- Update task breakdown to use existing utilities
Step 3: Break Down Into Tasks
Each task = one TDD cycle (2-5 minutes):
Task N: [Descriptive Name]
Files:
- Create: `exact/path/to/file.tsx`
- Modify: `exact/path/to/existing.tsx:42-58`
- Test: `exact/path/to/file.test.tsx`
Step 1: Write failing test
[exact test code]
Step 2: Run test, verify it fails
[exact command with expected output]
Step 3: Implement minimal code
[exact implementation code]
Step 4: Run test, verify it passes
[exact command with expected output]
Step 5: Commit
[exact commit command with message]
Checkpoint Tasks
When a task requires human judgment (visual verification, decisions, manual actions), mark it as a checkpoint:
Task N: [CHECKPOINT:VERIFY] Verify dashboard layout
After: Tasks 1-3 (agent starts dev server automatically)
Verify at http://localhost:3000/dashboard:
1. Desktop (>1024px): Sidebar visible, content fills remaining
2. Tablet (768px): Sidebar collapses
3. Mobile (375px): Single column layout
-> "approved" or describe issues
Task N: [CHECKPOINT:DECIDE] Select authentication provider
Options:
1. Clerk -- Best DX, pre-built UI, paid after 10k MAU
2. NextAuth -- Free, self-hosted, maximum control
3. Supabase Auth -- Built-in with our DB
-> Select: clerk, nextauth, or supabase
Rules:
- Automate everything possible before a checkpoint (start servers, deploy, etc.)
- Never ask user to run CLI commands -- agent does it
- Max 1 checkpoint per logical milestone
- See `references/checkpoint-patterns.md`
Task ordering:
- Data/types first (foundation)
- Core logic (business rules)
- UI components (presentation)
- Integration (wiring together)
- E2E tests (full flow verification)
Step 4: Generate Test Commands
<test_commands> Based on detected test runner:
vitest:
# Single test file
pnpm vitest run src/path/to/file.test.tsx
# Single test
pnpm vitest run src/path/to/file.test.tsx -t "test name"
# Watch mode (for development)
pnpm vitest src/path/to/file.test.tsx
playwright:
# Single test file
pnpm playwright test tests/path/to/file.spec.tsx
# Single test
pnpm playwright test tests/path/to/file.spec.tsx -g "test name"
# With UI
pnpm playwright test --ui
jest:
# Single test file
pnpm jest src/path/to/file.test.tsx
# Single test
pnpm jest src/path/to/file.test.tsx -t "test name"
</test_commands>
Step 5: Include UI References
For each UI task, include all relevant visual + aesthetic references:
Task N: Create ProductCard Component
Aesthetic Direction (from design doc):
- Tone: [e.g., "luxury/refined"]
- Memorable: [e.g., "hover lift with shadow bloom"]
- Typography: [e.g., "GT Sectra display + IBM Plex Sans body"]
- Color: [e.g., "warm neutrals, gold accent"]
- Motion: [e.g., "subtle hover states, no page transitions"]
Figma Reference:
- URL: https://figma.com/design/xxx/yyy?node-id=123-456
- Screenshot: docs/arc/specs/assets/YYYY-MM-DD-topic/figma-123-456.png
- To fetch fresh context during implementation:
mcp__figma__get_design_context: fileKey="xxx", nodeId="123:456"
ASCII Wireframe (from design):
┌─────────────────┐
│ [image] │
├─────────────────┤
│ Product Name │
│ $99.00 │
│ [Add to Cart] │ ← hover lift + shadow bloom
└─────────────────┘
Implementation Notes:
- AVOID: Roboto/Arial/system-ui, purple gradients, generic shadows
- ENSURE: The hover effect is the memorable moment
Files:
- Create: `src/components/product-card.tsx`
- Test: `src/components/product-card.test.tsx`
...
Why all three (aesthetic + Figma + ASCII):
- Aesthetic direction = the creative vision
- ASCII = structure and layout intent
- Figma = exact implementation details
- All three ensure the result is intentional, not generic
Step 6: Write Implementation Plan
Header:
# [Feature Name] Implementation Plan
> **For Arc:** Use /arc:implement to execute this plan. Subagents should report DONE, DONE_WITH_CONCERNS, NEEDS_CONTEXT, or BLOCKED.
**Design:** `docs/arc/specs/YYYY-MM-DD-<topic>-design.md` (or legacy fallback path)
**Goal:** [One sentence from design doc's problem statement]
**Stack:** [Framework] + [Test runner] + [Package manager]
---
Tasks section: Write all tasks following the template from Step 3.
Save to: The filename derived in Step 2 (e.g., docs/arc/plans/2025-06-15-user-dashboard-implementation.md)
Step 6.5: Review The Plan Document
After writing the plan:
- Dispatch `agents/workflow/plan-document-reviewer.md`
- Fix issues in the plan
- Re-review until approved; after 5 review loops, escalate to the user
Step 7: Commit and Offer Next Steps
git add docs/arc/plans/
git commit -m "docs: add <topic> implementation plan"
Plan is ready. Tell the user the plan is saved and offer next steps as plain text. Do NOT call EnterPlanMode or ExitPlanMode. Just print the summary and ask what they want to do next.
<success_criteria> Implementation plan is complete when:
- Test framework detected
- Design document loaded
- Tasks broken into TDD cycles (2-5 min each)
- Each task has exact file paths
- Each task has test code + implementation code
- Each task has exact test commands
- ASCII UI references included for UI tasks
- Plan committed to git </success_criteria>
<tool_restrictions_reminder>
REMINDER: You must NEVER call EnterPlanMode or ExitPlanMode at any point during this skill — not at the start, not in the middle, not at the end. The plan file you just wrote IS the deliverable. Present it to the user as a normal message.
</tool_restrictions_reminder>