# Ask Plan Questions
Identify uncertainty in the plan and ask only the questions that materially reduce implementation risk.
Use the `AskQuestion` tool for every question.
Do not force a fixed number of questions. Ask only relevant questions, and stop once risk is acceptably low.
## Workflow
- Review current context and plan.
- Identify gaps that could cause defects, rework, or delivery delays.
- Prioritize questions by impact and urgency.
- Ask concise, concrete questions with the `AskQuestion` tool.
- Ask in small batches (1-3 at a time) when many gaps exist.
- Incorporate user answers before asking the next batch.
- Continue until the remaining ambiguity is low-risk.
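The workflow above can be sketched as a simple loop. This is an illustrative sketch only: the `ask_question` callable stands in for whatever tool wrapper the agent actually has, and the impact/urgency fields and risk threshold are assumptions, not part of any real API.

```python
# Sketch of the batching workflow: prioritize gaps, ask in small
# batches, and stop once the remaining open risk is acceptably low.
# All names and the risk model here are illustrative assumptions.

RISK_THRESHOLD = 1  # stop once total open impact is at or below this


def prioritize(gaps):
    """Order gaps by impact first, then urgency (highest first)."""
    return sorted(gaps, key=lambda g: (g["impact"], g["urgency"]), reverse=True)


def run_questioning(gaps, ask_question, batch_size=3):
    """Ask prioritized questions in batches until remaining risk is low."""
    open_gaps = prioritize(gaps)
    asked = []
    while sum(g["impact"] for g in open_gaps) > RISK_THRESHOLD:
        batch, open_gaps = open_gaps[:batch_size], open_gaps[batch_size:]
        for gap in batch:
            # Incorporate the user's answer before the next batch.
            answer = ask_question(gap["question"])
            asked.append((gap["question"], answer))
    return asked
```

Note how low-impact gaps (for example, pure style preferences) may never be asked at all once the remaining risk drops below the threshold.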
## Question Selection Rules
- Ask only questions that change implementation decisions.
- Skip questions already answered in the thread or files.
- Prefer specific questions over broad prompts.
- Tie each question to one risk area.
- De-prioritize style preferences unless they affect architecture or acceptance.
## Question Bank (pick only relevant items)
Use these as templates. Reword for project context.
- What is the single success criterion for this task?
- What is explicitly out of scope for this implementation?
- Which environments must this work in (local, staging, production)?
- Are there hard deadlines or sequencing constraints?
- Which existing behavior must remain unchanged?
- What are the non-negotiable technical constraints (language, framework, versions)?
- What data contracts or schemas are fixed versus negotiable?
- What are the expected edge cases and failure modes?
- What performance/reliability targets must be met?
- What security/privacy/compliance requirements apply?
- What is the source of truth when docs and code disagree?
- What acceptance tests define "done"?
- What level of backward compatibility is required?
- What rollout strategy is expected (flag, staged, immediate)?
- Who is the final approver for tradeoff decisions?
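One possible way to hold the bank above as data is to tag each template with a single risk area, so the selection rules ("tie each question to one risk area") can filter on it. The risk-area labels below are illustrative, not canonical:

```python
# A subset of the question bank, each template tagged with one risk
# area. Labels and the data shape are illustrative assumptions.

QUESTION_BANK = [
    ("scope", "What is the single success criterion for this task?"),
    ("scope", "What is explicitly out of scope for this implementation?"),
    ("constraints", "What are the non-negotiable technical constraints?"),
    ("quality", 'What acceptance tests define "done"?'),
    ("rollout", "What rollout strategy is expected (flag, staged, immediate)?"),
]


def relevant_questions(risk_areas):
    """Pick only templates tied to risk areas identified in the plan."""
    return [question for area, question in QUESTION_BANK if area in risk_areas]
```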
## Completion Criteria
Finish questioning when all of the following are true:
- Scope boundaries are clear.
- Constraints and dependencies are explicit.
- Acceptance criteria are testable.
- Known high-risk assumptions are resolved or documented.
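The completion check above is a conjunction: questioning stops only when every criterion holds. A minimal sketch, with hypothetical field names:

```python
# Stop questioning only when all completion criteria are true.
# The state keys below are illustrative assumptions.

def questioning_complete(state):
    """True only when every completion criterion is satisfied."""
    return all([
        state["scope_clear"],
        state["constraints_explicit"],
        state["acceptance_testable"],
        state["high_risk_assumptions_resolved"],
    ])
```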