Clarify - Intensive Questioning (v2.37)
Systematically gather requirements using TLDR semantic search + AskUserQuestion tool.
v2.88 Key Changes (MODEL-AGNOSTIC)
- Model-agnostic: uses the model configured in `~/.claude/settings.json` or via CLI/env vars
- No flags required: works with the configured default model
- Flexible: works with GLM-5, Claude, MiniMax, or any configured model
- Settings-driven: model selection via `ANTHROPIC_DEFAULT_*_MODEL` env vars
Quick Start
```bash
/clarify    # Start intensive questioning for current task
```
Pre-Clarification: TLDR Semantic Search (v2.37)
AUTOMATIC - Before asking questions, use semantic search to understand existing code:
```bash
# Find existing related functionality (95% token savings)
tldr semantic "$USER_TASK_KEYWORDS" .

# Example: for "add authentication", find existing auth code
tldr semantic "authentication login session user password" .

# Get structure overview for context
tldr structure . --lang "$PRIMARY_LANGUAGE"
```
This helps formulate better questions based on what already exists in the codebase.
Aristotle-First Clarification (v3.0)
Before asking structured questions, apply Aristotle Phase 1 (Assumption Autopsy):
- What assumptions are embedded in the user's request? Identify inherited framing.
- What clarifications challenge assumptions vs confirm them? Prioritize assumption-challenging questions.
- What would change if the core assumption is wrong? This identifies the highest-value clarification.
Example: User says "optimize database queries". Assumption Autopsy reveals: "We assume queries are the bottleneck, not the schema design or the caching layer." The first MUST_HAVE question should challenge this assumption.
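For the "optimize database queries" example, the assumption-challenging question could be sketched as an AskUserQuestion block (the question text, labels, and descriptions here are illustrative, not prescribed by the skill):

```yaml
AskUserQuestion:
  questions:
    - question: "Have we confirmed that query execution (not schema design or caching) is the actual bottleneck?"
      header: "Bottleneck"
      multiSelect: false
      options:
        - label: "Yes, profiled"
          description: "Slow queries identified from profiling data"
        - label: "No, assumed"
          description: "No measurements yet; the bottleneck is an assumption"
        - label: "Partially"
          description: "Some evidence, but schema and caching not ruled out"
```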
Workflow
MUST_HAVE Questions (Blocking)
These MUST be answered before proceeding:
```yaml
AskUserQuestion:
  questions:
    - question: "What is the primary goal of this feature?"
      header: "Goal"
      multiSelect: false
      options:
        - label: "New user-facing feature"
        - label: "Internal refactoring"
        - label: "Bug fix"
        - label: "Performance optimization"
```
Categories to Cover
- Functional Requirements
  - What exactly should this do?
  - What are the inputs/outputs?
  - Edge cases?
- Technical Constraints
  - Existing patterns to follow?
  - Technology preferences?
  - Performance requirements?
- Integration Points
  - Existing code interactions?
  - APIs to maintain?
  - Database changes?
- Testing & Validation
  - How will this be tested?
  - Acceptance criteria?
- Deployment
  - Feature flags needed?
  - Rollback strategy?
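As a sketch, a single MUST_HAVE batch can cover several of these categories at once while staying within the 4-question limit (question text and options below are illustrative, not part of the skill spec):

```yaml
AskUserQuestion:
  questions:
    - question: "What are the expected inputs and outputs?"
      header: "I/O"
      multiSelect: false
      options:
        - label: "Defined in spec"
        - label: "Needs definition"
    - question: "Are there existing patterns this must follow?"
      header: "Patterns"
      multiSelect: false
      options:
        - label: "Yes, match existing code"
        - label: "No, greenfield"
    - question: "Does this change any public API or database schema?"
      header: "Integration"
      multiSelect: true
      options:
        - label: "Public API"
        - label: "Database schema"
        - label: "Neither"
    - question: "How will this be validated?"
      header: "Testing"
      multiSelect: true
      options:
        - label: "Unit tests"
        - label: "Integration tests"
        - label: "Manual QA"
```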
NICE_TO_HAVE Questions
Accept defaults but still ask:
```yaml
AskUserQuestion:
  questions:
    - question: "Implementation preferences?"
      header: "Approach"
      multiSelect: true
      options:
        - label: "Minimal changes"
        - label: "Include tests"
        - label: "Add documentation"
```
Question Templates
Goal Clarification
```yaml
AskUserQuestion:
  questions:
    - question: "What problem does this solve?"
      header: "Problem"
      options:
        - label: "User pain point"
          description: "Direct user-facing issue"
        - label: "Technical debt"
          description: "Code maintainability"
        - label: "Performance issue"
          description: "Speed/resource usage"
        - label: "Security concern"
          description: "Vulnerability fix"
```
Scope Definition
```yaml
AskUserQuestion:
  questions:
    - question: "What is the scope?"
      header: "Scope"
      options:
        - label: "Single file"
        - label: "Single module"
        - label: "Multiple modules"
        - label: "Cross-system"
```
Priority
```yaml
AskUserQuestion:
  questions:
    - question: "Priority level?"
      header: "Priority"
      options:
        - label: "Critical (blocking)"
        - label: "High (this sprint)"
        - label: "Medium (this quarter)"
        - label: "Low (backlog)"
```
Integration
- Invoked by /orchestrator in Step 1
- Pre-step: tldr semantic search (automatic in v2.37)
- Must complete before CLASSIFY step
- Results inform plan complexity
TLDR Integration (v2.37)
| Phase | TLDR Command | Purpose |
|---|---|---|
| Before questions | `tldr semantic "$KEYWORDS" .` | Find related code |
| Context gathering | `tldr structure .` | Codebase overview |
| Dependency check | `tldr deps "$FILE" .` | Impact analysis |
Agent Teams Integration (v2.88)
Optimal Scenario: Pure Agent Teams (Native)
This skill uses Pure Agent Teams with native coordination - no custom subagent specialization needed.
Why Scenario A (Pure Agent Teams) for This Skill
- Clarification is primarily a sequential questioning workflow
- AskUserQuestion is the primary tool, available to all agents
- No specialized parallel research requirements
- Native agent types sufficient for requirement gathering
- Lower complexity, faster execution
Configuration
- TeamCreate: Optional, for simple clarification tasks
- Task: Use native agent types (no ralph-* needed)
- Hooks: TeammateIdle + TaskCompleted available if needed
- Simple: Minimal setup overhead
Workflow Pattern
```text
TeamCreate (optional)
  → AskUserQuestion for requirements
  → Native agent executes clarification
  → Complete
```
When This Is Sufficient
- Sequential requirement gathering
- Simple clarification workflows
- No specialized research needed
- Quick interactive sessions preferred
Anti-Patterns
- Never proceed with unanswered MUST_HAVE questions
- Never assume user intent
- Never skip clarification for features
- Never ask more than 4 questions at once (AskUserQuestion limit)
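Because AskUserQuestion accepts at most 4 questions per call, a larger question set has to be split across calls. A sketch of that split, with question text illustrative and options omitted for brevity (the real tool requires options on each question):

```yaml
# Batch 1 of 2: blocking MUST_HAVE questions (max 4 per call)
AskUserQuestion:
  questions:
    - question: "What is the primary goal?"
      header: "Goal"
    - question: "What is the scope?"
      header: "Scope"
    - question: "Priority level?"
      header: "Priority"
    - question: "How will this be validated?"
      header: "Testing"
---
# Batch 2 of 2: NICE_TO_HAVE questions go in a follow-up call
AskUserQuestion:
  questions:
    - question: "Implementation preferences?"
      header: "Approach"
      multiSelect: true
```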