# swing-clarify
Scope Clarifier
Prevents the most common AI failure: rushing to execute before understanding what's actually needed.
Addresses the cognitive failure of Premature Closure — AI interprets ambiguous requests using defaults and assumptions instead of asking, producing confident output that answers the wrong question.
## Rules (Absolute)
- Never execute before clarifying. If the ambiguity score is above the threshold, generate questions FIRST. Do not start implementation, research, or analysis until scope is confirmed.
- Maximum 3 questions. Respect the user's time. If more than 3 questions are needed, the request needs decomposition, not interrogation. Ask the 3 highest-impact questions.
- Questions must be actionable. Every question must change what you build. "What's your timeline?" is only valid if it affects scope. "Should this handle authentication?" is always valid if auth wasn't mentioned.
- Prefer multiple choice over open-ended. "Should auth use (a) session cookies, (b) JWT, or (c) OAuth2 with a provider?" beats "How should auth work?"
- State your default assumption. For each question, state what you WOULD assume if the user doesn't answer. This lets them skip questions where the default is fine.
- Clear requests get a green light, not questions. If the request is unambiguous, say so and proceed. Do not ask questions for the sake of asking.
- Never block on style preferences. Naming conventions, formatting, folder structure — these are not scope questions. Use project conventions or sensible defaults.
## Process
### Stage 1: 5W1H Decomposition
Break the request into six dimensions:
| Dimension | Question | Example Gap |
|---|---|---|
| What | What exactly is being built/changed? | "Build auth" — login? signup? password reset? SSO? |
| Who | Who uses this? What roles/permissions? | "Users can edit" — all users? admins only? owners? |
| Where | Where does this live? What system/service? | "Add to the API" — which API? new endpoint? existing? |
| When | What triggers this? What's the lifecycle? | "Send notifications" — real-time? batched? on what event? |
| Why | What problem does this solve? What are the success criteria? | "Improve performance" — latency? throughput? cost? |
| How | Are there constraints on implementation? | "Use the existing stack" — which parts? any exceptions? |
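The decomposition above can be modeled as a small data structure. This sketch is illustrative, not a required API; the names are assumptions made for the example:

```python
from enum import Enum

class Dimension(Enum):
    """The six 5W1H dimensions."""
    WHAT = "What"
    WHO = "Who"
    WHERE = "Where"
    WHEN = "When"
    WHY = "Why"
    HOW = "How"

class Clarity(Enum):
    """Clarity rating for one dimension."""
    CLEAR = "Clear"          # explicitly stated or unambiguously implied
    ASSUMABLE = "Assumable"  # reasonable default exists; state it
    AMBIGUOUS = "Ambiguous"  # multiple valid interpretations

# Decomposition of "Build me an auth system" as a dimension -> rating map.
decomposition = {
    Dimension.WHAT: Clarity.AMBIGUOUS,   # login? signup? password reset? SSO?
    Dimension.WHO: Clarity.AMBIGUOUS,    # roles not specified
    Dimension.WHERE: Clarity.CLEAR,      # inferred from project structure
    Dimension.WHEN: Clarity.CLEAR,       # standard request-response flow
    Dimension.WHY: Clarity.ASSUMABLE,    # secure user access (default)
    Dimension.HOW: Clarity.ASSUMABLE,    # existing stack (from codebase)
}
```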
### Stage 2: Ambiguity Scoring
For each dimension, rate clarity:
- Clear — explicitly stated or unambiguously implied by context
- Assumable — not stated, but a reasonable default exists (state the default)
- Ambiguous — multiple valid interpretations, wrong guess = wasted work
Count the Ambiguous dimensions:
- 0 Ambiguous → Green light. Proceed immediately. State: "Scope is clear. Proceeding."
- 1-2 Ambiguous → Quick clarify. Ask 1-2 targeted questions with defaults.
- 3+ Ambiguous → Must clarify. Ask up to 3 highest-impact questions. Do not proceed until answered.
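The three thresholds above can be sketched as a tiny triage function. This is a hypothetical helper written for illustration, not part of any skill API:

```python
def triage(ratings: dict[str, str]) -> str:
    """Apply the ambiguity thresholds: count Ambiguous dimensions, pick an action."""
    ambiguous = sum(1 for rating in ratings.values() if rating == "Ambiguous")
    if ambiguous == 0:
        return "Green light"    # proceed immediately
    if ambiguous <= 2:
        return "Quick clarify"  # 1-2 targeted questions with defaults
    return "Must clarify"       # up to 3 questions; do not proceed until answered

# "Build me an auth system" scores 2/6 Ambiguous:
ratings = {"What": "Ambiguous", "Who": "Ambiguous", "Where": "Clear",
           "When": "Clear", "Why": "Assumable", "How": "Assumable"}
print(triage(ratings))  # → Quick clarify
```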
### Stage 3: Generate Clarifying Questions
For each Ambiguous dimension (up to 3, prioritized by impact):
**Q[N]: [Specific question]**
Options: (a) [option] (b) [option] (c) [option]
My default: (b) — [why this is the reasonable default]
Impact: [what changes depending on the answer]
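The question template above can be rendered mechanically. This dataclass is a hypothetical sketch of that shape; the field names are assumptions, not a defined schema:

```python
from dataclasses import dataclass

@dataclass
class ClarifyingQuestion:
    number: int
    question: str
    options: list[str]    # in (a), (b), (c) order
    default: int          # 0-based index of the default option
    default_reason: str
    impact: str

    def render(self) -> str:
        """Emit the Q[N] block in the template's format."""
        letters = "abcdefghij"
        opts = " ".join(f"({letters[i]}) {opt}" for i, opt in enumerate(self.options))
        return (f"**Q{self.number}: {self.question}**\n"
                f"Options: {opts}\n"
                f"My default: ({letters[self.default]}) — {self.default_reason}\n"
                f"Impact: {self.impact}")

q1 = ClarifyingQuestion(
    number=1,
    question="What auth flows are needed?",
    options=["Email/password login only",
             "Login + signup + password reset",
             "Full OAuth2 with providers"],
    default=1,
    default_reason="most apps need the complete basic flow",
    impact="(a) = 2 hours, (b) = 4 hours, (c) = 8 hours + provider setup",
)
print(q1.render())
```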
### Stage 4: Confirm and Proceed
After receiving answers (or user says "go with defaults"):
- Restate the clarified scope in 3-5 bullet points
- Note any remaining assumptions (rated as Assumable, not Ambiguous)
- Proceed with execution
## Output Format
## Scope Analysis: [Request Summary]
### 5W1H Decomposition
| Dimension | Status | Detail |
|:----------|:-------|:-------|
| What | Clear / Assumable / Ambiguous | [extracted or missing info] |
| Who | Clear / Assumable / Ambiguous | [extracted or missing info] |
| Where | Clear / Assumable / Ambiguous | [extracted or missing info] |
| When | Clear / Assumable / Ambiguous | [extracted or missing info] |
| Why | Clear / Assumable / Ambiguous | [extracted or missing info] |
| How | Clear / Assumable / Ambiguous | [extracted or missing info] |
### Ambiguity Score: [N]/6 → [Green light / Quick clarify / Must clarify]
### Questions (if any)
**Q1: [Question]**
Options: (a) ... (b) ... (c) ...
My default: (b) — [reason]
Impact: [what changes]
**Q2: ...**
### Clarified Scope (after answers)
- [bullet 1]
- [bullet 2]
- [bullet 3]
### Remaining Assumptions
- [assumption 1] — rated Assumable because [reason]
## Quality Calibration
### BAD Scope Clarification (Don't Do This)
User: "Build me an auth system"
AI: I have a few questions before we start:
1. What programming language would you like to use?
2. What database do you prefer?
3. Do you want tests?
4. What's your timeline?
5. How many users do you expect?
6. Should it be containerized?
7. What CI/CD platform do you use?
Problems:
- 7 questions — overwhelming, most don't affect auth scope
- "What language?" — look at the project, it's obvious
- "Do you want tests?" — always yes, not a scope question
- "What's your timeline?" — doesn't change what you build
- No multiple choice, no defaults stated
- No 5W1H structure, just a brain dump
### GOOD Scope Clarification (Do This)
User: "Build me an auth system"
## Scope Analysis: Auth System
### 5W1H Decomposition
| Dimension | Status | Detail |
|:----------|:-------|:-------|
| What | Ambiguous | "Auth system" — login only? signup? password reset? SSO? |
| Who | Ambiguous | User roles not specified — single role or RBAC? |
| Where | Clear | Next.js app (inferred from project structure) |
| When | Clear | Standard request-response auth flow |
| Why | Assumable | Secure user access (standard) |
| How | Assumable | Project uses TypeScript + Prisma (from codebase) |
### Ambiguity Score: 2/6 → Quick clarify
**Q1: What auth flows are needed?**
Options: (a) Email/password login only (b) Login + signup + password reset (c) Full OAuth2 with Google/GitHub providers
My default: (b) — most apps need the complete basic flow
Impact: (a) = 2 hours, (b) = 4 hours, (c) = 8 hours + provider setup
**Q2: User roles?**
Options: (a) Single role — all users equal (b) Admin + User (c) Custom RBAC with permissions
My default: (a) — add roles later when needed (YAGNI)
Impact: (c) requires permission tables, middleware, and role management UI
Why this is better:
- Only 2 questions, both high-impact
- Multiple choice with clear options
- Defaults stated with reasoning
- Impact quantified (hours, complexity)
- 4 dimensions already resolved from context (no wasted questions)
- 5W1H structure makes the analysis transparent
## When to Use
- Start of any non-trivial task or feature request
- When a request contains words like "system", "module", "feature", "improve", "fix" without specifics
- When you catch yourself making assumptions about what the user wants
- When a task could take 2 hours or 2 weeks depending on interpretation
- Before invoking any other Stack Skill (clarify scope first, then research/review/plan)
## When NOT to Use
- Bug reports with reproduction steps (scope is the bug)
- Explicit, detailed specifications ("Add a GET /users endpoint returning id and name")
- Follow-up tasks in an ongoing conversation where context is established
- Quick questions ("What does this function do?")
- When the user explicitly says "just do it" or "use your judgment"
## Gotchas
- Don't ask what you can read. If the project has a package.json, tsconfig.json, or existing code — check it before asking "what language/framework?" That's not clarification, it's laziness.
- 3 questions max is a hard ceiling. If you need more, the request needs decomposition into sub-tasks, not more interrogation.
- Style preferences are not scope questions. Naming conventions, folder structure, formatting — use project conventions or sensible defaults. Never block on these.
- State your default for every question. The user should be able to say "go with defaults" and skip all questions. If your defaults aren't stated, you've failed.
- Green light means GO. If the request is clear, say so and proceed immediately. Do not ask questions for the sake of appearing thorough.
## Integration Notes
- Before everything: swing-clarify is designed to run FIRST. Clarified scope feeds into all other skills.
- With swing-research: Clarified scope → focused research questions (prevents researching the wrong thing)
- With swing-review: Clarified scope → review against actual requirements (prevents reviewing against assumed requirements)
- With swing-options: Clarified constraints → better option generation (constraints define what's conventional vs unconventional)
- With swing-mortem: Clarified assumptions → more specific failure scenarios
- With swing-trace: Clarified scope → clearer claim isolation in Stage 1 (fewer ambiguous assumptions to trace)