wtf.spike
Spike
Run a time-boxed technical investigation. Core value: turns an unknown into a decision — produces concrete findings and a recommendation so the team can write specs confidently rather than guessing.
Process
0. GitHub CLI setup
Run steps 1–2 of ../references/gh-setup.md (install check and auth check). Stop if gh is not installed or not authenticated. Extensions are not required for this skill.
Skip this step if gh-setup was already confirmed this session.
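The two checks from steps 1–2 of gh-setup.md can be sketched as below; the exact behaviour of that reference doc is assumed, not quoted:

```shell
# Sketch of the install and auth checks (assumed behaviour of gh-setup.md steps 1-2)
check_gh() {
  # Step 1: is the gh binary on PATH?
  command -v gh >/dev/null 2>&1 || { echo "gh is not installed" >&2; return 1; }
  # Step 2: is gh authenticated against GitHub?
  gh auth status >/dev/null 2>&1 || { echo "gh is not authenticated" >&2; return 1; }
}
```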
1. Define the question
If the user described the investigation in their request, extract the core question from it. Otherwise apply ../references/questioning-style.md and ask "What question should this spike answer?" — header Spike question, options from specific questions inferred from any context provided (e.g. linked Epic, conversation).
The question must be specific and answerable — not "how does caching work?" but "is Redis or in-memory caching the right choice for our session store given our deployment constraints?" — and scoped to a decision the team actually needs to make.
Apply ../references/questioning-style.md and ask "How much time should this spike take?" — header Time box:
- 1 hour → quick feasibility check
- Half day → moderate investigation
- 1 day → deep dive with proof of concept
2. Identify the linked issue (optional)
Apply ../references/questioning-style.md and ask "Is this spike linked to an existing issue?" — header Linked issue:
- Candidates from `gh issue list --label "epic,feature" --state open --limit 5`
- No linked issue — standalone investigation
If linked: fetch the issue to extract domain context, constraints, and success metrics that inform the investigation scope.
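Fetching that context might look like the following sketch; `title`, `body`, and `labels` are standard `gh issue view --json` fields, and the issue number in the usage line is hypothetical:

```shell
# Pull domain context from a linked issue as JSON (fields: title, body, labels)
fetch_issue_context() {
  gh issue view "$1" --json title,body,labels
}
# Usage: fetch_issue_context 42   (42 is a hypothetical issue number)
```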
3. Research
Run all research in parallel using the Agent tool:
Codebase exploration:
- Search for existing implementations, prior attempts, or ADRs addressing the same question (domain nouns, patterns, imports)
- Load `docs/steering/TECH.md` per the best-effort consumer-side load in `../references/steering-doc-process.md` for constraints that rule out certain approaches
- Identify integration points and dependencies the solution must respect
External research (if available):
- Use WebSearch/WebFetch for relevant documentation, benchmarks, or known trade-offs
Synthesise findings internally. Do not dump raw research at the user.
4. Derive 2–3 concrete approaches
For each approach:
- Name: short label (e.g. "Redis session store", "In-memory with TTL")
- Summary: one sentence describing what it involves
- Pros: 2–3 concrete advantages relevant to this codebase and constraints
- Cons: 2–3 concrete risks or costs
- Effort estimate: rough implementation cost (hours or days)
- Fit with TECH.md: does it align with the established stack and patterns?
5. Recommend
State a single recommendation:
"Recommend [Approach N] because [1–2 key reasons]. Main risk: [X], mitigated by [Y]."
If evidence is genuinely ambiguous or the spike revealed the question is harder than expected, say so clearly — recommend a proof of concept or a follow-up spike with a narrower question.
6. Review with user
Show the full analysis (approaches + recommendation). Then apply ../references/questioning-style.md and ask "Does this answer the question well enough to proceed?" — header Spike review:
- Yes — record the findings → write the spike doc
- Need more depth on one approach → explore a specific area further
- Question changed → the investigation revealed a different question
Apply any adjustments, then proceed.
7. Write the findings doc
Write to docs/spikes/<YYYY-MM-DD>-<slug>.md where <slug> is a 2–4 word kebab-case summary of the question (e.g. session-store-strategy).
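As a sketch, the path can be assembled from today's date and a hand-chosen slug (the slug below is the example from above, not generated):

```shell
# Build the findings-doc path; the slug is chosen by hand, not derived
slug="session-store-strategy"                    # 2-4 word kebab-case summary
spike_path="docs/spikes/$(date +%F)-${slug}.md"  # date +%F prints YYYY-MM-DD
echo "$spike_path"
```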
Structure:
```markdown
# Spike: <question>

**Date:** <YYYY-MM-DD>
**Time box:** <duration>
**Linked issue:** #<n> or —

## Question

<the specific question this spike answered>

## Approaches considered

### <Approach 1 name>

**Summary:** ...
**Pros:** ...
**Cons:** ...
**Effort:** ...

### <Approach 2 name>

...

## Recommendation

<recommendation text>

## Decision

<!-- Fill when the team decides -->
- [ ] Accepted — proceeding with [approach]
- [ ] Rejected — reason: ...
- [ ] Needs follow-up: ...
```
```shell
mkdir -p docs/spikes
git add docs/spikes/<filename>
git commit -m "docs(spike): <question summary>"
```
Print the file path.
8. Post to linked issue (if applicable)
If a linked issue exists, post a comment:
```shell
gh issue comment <issue_number> --body "🔬 Spike concluded: **<question>** → Recommendation: <one-line summary>. Full findings: docs/spikes/<filename>.md"
```
9. Offer next steps
Apply ../references/questioning-style.md and ask "What's next?" — header Next step:
- Write an Epic from this (default) → follow the `wtf.write-epic` process, seeding it with the spike's recommendation and findings as context
- Write a Task from this → the spike uncovered a specific narrow change; follow the `wtf.write-task` process with the spike recommendation as the task description
- Stop here → exit; the team will decide separately