Execute
Receive a task, classify its difficulty, think proportionally, and start working immediately.
<output_language>
Default all user-facing deliverables, saved artifacts, reports, plans, generated docs, summaries, handoff notes, commit/message drafts, and validation notes to Korean, even when this canonical skill file is written in English.
Preserve source code identifiers, CLI commands, file paths, schema keys, JSON/YAML field names, API names, package names, proper nouns, and quoted source excerpts in their required or original language.
Use a different language only when the user explicitly requests it, an existing target artifact must stay in another language for consistency, or a machine-readable contract requires exact English tokens. If a localized template or reference exists (for example *.ko.md or *.ko.json), prefer it for user-facing artifacts.
</output_language>
<request_routing>
Positive triggers
- A direct task instruction with a clear deliverable: "add pagination to the user list", "implement dark mode toggle".
- An explicit execution request: "do this", "build this", "make this work".
- A scoped feature or change request that does not require extended planning: "refactor this", "add tests", "clean up this component".
Out-of-scope
- Bug reports with error messages or failing symptoms. Route to bug-fix.
- Repository-wide build, CI, or deployment failures. Route to deploy-fix.
- Pre-release validation or build readiness checks. Route to pre-deploy.
- Strategic planning or architecture decisions. Route to a dedicated planning or architecture skill when available; in this repo prefer prd-maker for requirements and framework-specific architecture skills for implementation architecture.
- Code review or quality audit. Route to a dedicated review or QA skill when available; in this repo prefer qa for systematic QA work.
- Security analysis. Route to a dedicated security skill when available; in this repo use framework-specific security skills such as tanstack-start-security when applicable.
- Explicit workflow invocations such as $autoresearch-skill, $ralph, or another $skill request. Preserve the explicitly requested workflow instead of treating the prompt as a generic execute task.
Boundary cases
- If the request mixes a bug fix with new work, execute owns it when the primary intent is the new work.
- If the task scope is genuinely unclear (no deliverable identifiable), ask one clarifying question — then execute.
- If the user asks for a persistent guaranteed-completion loop ("keep going until done", "until max score", or Ralph-style repetition), route to Ralph when available rather than silently downgrading it to one-shot execute.
- If the task turns out to require architectural decisions mid-flight, pause and consult the user rather than guessing.
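The routing rules above can be sketched as a keyword dispatcher. The skill names come from this file, but the trigger keywords, the function, and its matching strategy are illustrative assumptions, not part of the skill contract:

```python
# Illustrative router for the rules above; keyword sets are assumptions.
ROUTES = [
    (("error", "exception", "failing", "crash"), "bug-fix"),
    (("deployment failed", "pipeline broken", "build broken"), "deploy-fix"),
    (("pre-release", "release readiness"), "pre-deploy"),
    (("requirements", "prd"), "prd-maker"),
    (("review", "audit"), "qa"),
    (("keep going until done", "until max score"), "ralph"),
]

def route(request: str) -> str:
    """Return the owning skill for a request; default to execute."""
    text = request.lower()
    # Explicit $skill invocations win first, so a requested workflow
    # is never silently downgraded to a one-shot execute.
    for token in text.split():
        if token.startswith("$"):
            return token.lstrip("$")
    for keywords, skill in ROUTES:
        if any(keyword in text for keyword in keywords):
            return skill
    return "execute"
```

Substring matching is a deliberate simplification; a real router would classify intent, not keywords.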
</request_routing>
<argument_validation>
If ARGUMENT is missing or too vague to identify a deliverable, ask briefly:
What should I execute?
- Task or feature to implement
- Target files or area
- Any constraints or requirements
Do not over-interrogate. One round of clarification maximum, then start working.
</argument_validation>
<difficulty_classification>
Classify before thinking. Use these signals:
| Difficulty | Signals | Thinking depth |
|---|---|---|
| Easy | Single file, clear scope, familiar pattern, mechanical change | 1-3 thoughts |
| Medium | Multi-file, some ambiguity, moderate scope, requires context gathering | 4-6 thoughts |
| Hard | Cross-cutting, architectural impact, unfamiliar domain, complex interactions | 7+ thoughts |
For compound tasks (e.g. "refactor + add tests"), classify by the hardest sub-task. Treat the compound as one deliverable, not separate jobs.
When uncertain, round up one level. It is cheaper to over-think slightly than to redo work.
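The table and the two rules above can be sketched as follows. The thought budgets are taken from the table; the signal vocabularies are illustrative stand-ins for its Signals column:

```python
from typing import Iterable

# Thought budgets (min, max) per the table above; None means open-ended.
BUDGETS = {"easy": (1, 3), "medium": (4, 6), "hard": (7, None)}
RANK = {"easy": 0, "medium": 1, "hard": 2}

def classify(signals: Iterable[str], uncertain: bool = False) -> str:
    """Classify one task from its signals, rounding up one level when uncertain."""
    signals = set(signals)
    # Signal sets are assumptions standing in for the table's prose signals.
    hard = {"cross-cutting", "architectural", "unfamiliar-domain", "complex-interactions"}
    medium = {"multi-file", "ambiguous", "moderate-scope", "needs-context"}
    level = "hard" if signals & hard else "medium" if signals & medium else "easy"
    if uncertain and level != "hard":
        level = list(RANK)[RANK[level] + 1]  # round up one level
    return level

def classify_compound(subtasks: list[Iterable[str]]) -> str:
    """A compound task takes the difficulty of its hardest sub-task."""
    return max((classify(s) for s in subtasks), key=RANK.get)
```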
</difficulty_classification>
<mandatory_reasoning>
Adaptive Sequential Thinking
Always run sequential-thinking before implementation. The number of thoughts scales with difficulty:
Easy (1-3 thoughts):
- What exactly needs to change
- Where to change it
- How to verify
Medium (4-6 thoughts):
- Scope and deliverable clarity
- Relevant code exploration plan
- Implementation approach
- Edge cases or risks
- Verification strategy
- (Optional) Alternative approach comparison
Hard (7+ thoughts):
- Scope and deliverable clarity
- Codebase context and dependencies
- Design approach
- Implementation breakdown
- Edge cases and failure modes
- Cross-cutting impact
- Verification strategy
- 8+ (as needed): Revision, branching, deeper analysis
Announce the classification briefly before starting:
Difficulty: [easy/medium/hard] — [one-line reason]
</mandatory_reasoning>
<execution_rules>
Core principle: act, don't deliberate
- Start implementing after thinking. Do not present options or wait for confirmation.
- If a decision point arises where both paths are reasonable, pick the simpler one and note it.
- Only pause for user input when the task itself is ambiguous (what to do), not when the approach is ambiguous (how to do it).
- Keep scope to what was asked. Do not add unrequested improvements.
Implementation
- Read relevant code before editing.
- Make targeted changes — smallest diff that achieves the deliverable.
- Run targeted validation after changes (typecheck, test, build as appropriate).
- If validation fails, fix it within scope. Do not leave broken state.
</execution_rules>
| Step | Task | Tool |
|---|---|---|
| 1 | Validate input — identify the deliverable | - |
| 2 | Classify difficulty (easy/medium/hard) | - |
| 3 | Think proportionally | sequential-thinking |
| 4 | Explore relevant code | Read/Grep/Glob |
| 5 | Implement | Edit/Write |
| 6 | Validate (typecheck/test/build) | Bash |
| 7 | Report outcome and changed files | - |
Steps 4-6 may repeat as needed. The goal is a working deliverable, not a single pass.
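The repeat note above can be read as a loop: explore, implement, validate, and go around again until validation passes. A minimal sketch, assuming the three steps are injectable callables and that the pass cap is an arbitrary safety limit, not part of the skill:

```python
def run_steps_4_to_6(explore, implement, validate, max_passes: int = 5) -> bool:
    """Repeat steps 4-6 of the table until validation passes."""
    for _ in range(max_passes):
        context = explore()      # step 4: read relevant code
        implement(context)       # step 5: make targeted changes
        if validate():           # step 6: typecheck/test/build
            return True          # working deliverable; report outcome
    return False                 # still failing; report what remains unverified
```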
<completion_report>
After execution, report briefly:
## Done
**Task**: [what was executed]
**Difficulty**: [easy/medium/hard]
**Changes**: [list of changed files]
**Validation**: [what was verified and result]
If anything remains unverified, say what and why.
</completion_report>
Execution checklist:
- ARGUMENT validated — deliverable is clear
- Difficulty classified
- sequential-thinking completed (proportional depth)
- Relevant code read before editing
- Implementation complete
- Validation executed (typecheck/test/build)
- Outcome reported with changed files
Forbidden:
- Presenting options and waiting for selection (this is execute, not diagnose)
- Over-thinking easy tasks (1-3 thoughts max for easy)
- Under-thinking hard tasks (7+ thoughts minimum for hard)
- Expanding scope beyond what was asked
- Claiming completion without running validation