Best Practices
Two-Phase Rule
- Phase 1: Research. Dispatch find-docs and/or WebSearch queries.
- Phase 2: Synthesize and act. Only after Phase 1 results arrive.
The user's argument may be a question or an imperative. Imperatives ("refine X", "set up Y") determine what Phase 2 does, not whether Phase 1 happens. Phase 1 always runs.
Red flags indicating you are about to skip research:
| Thought | Reality |
|---|---|
| "I already know this" | Training data goes stale. Config keys get renamed, APIs get deprecated. |
| "The user said to act" | The imperative scopes Phase 2; it does not eliminate Phase 1. |
| "This is a simple lookup" | A 30-second search costs nothing. A wrong recommendation costs a debugging round-trip. |
Workflow
1. Identify Research Targets
Break the topic into 2-4 specific queries targeting distinct aspects (libraries, patterns, configuration, pitfalls). For single-library lookups, call find-docs or WebSearch directly without subagents.
2. Parallel Research
Dispatch one subagent per query in a single message so they run in parallel. Each uses find-docs (Context7) and WebSearch. Be concrete in each subagent prompt: pass library names, version constraints, and the user's specific context. Vague prompts produce vague results.
<subagent_prompt_template>
The user wants to [user's task]. We need the latest, authoritative guidance on [specific aspect].
Use the find-docs skill to look up [library/tool] documentation, then use WebSearch to find recent guides and recommendations for "[specific search query]".
<output_format>
Report in under 300 words. Include:
- Recommended approach with rationale
- Concrete code/config examples
- Pitfalls to avoid
- Sources consulted (with publication dates)
If you cannot find authoritative guidance on a point, say so explicitly rather than guessing.
</output_format>
</subagent_prompt_template>
3. Synthesize
Phase check: If no research results have arrived yet, STOP. You are still in Phase 1. Go back to step 2.
After all subagents return, merge using these criteria:
- Deduplicate overlapping recommendations
- Rank by authority: official docs > well-known guides > blog posts > training data
- Flag conflicts with attribution (which source said what)
- Discard stale results: a 2022 guide for a fast-moving framework is noise
If a subagent failed or returned empty, note the gap and proceed with the results you have. Do not block synthesis waiting for a straggler.
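The merge criteria above can be sketched as a dedup pass plus an authority sort. The authority tiers are the ones listed; the dedup key (normalized recommendation text) and the `source_type` field are illustrative assumptions, not a prescribed schema.

```python
# Lower rank = more authoritative: official docs > well-known guides > blog posts > training data
AUTHORITY = {"official_docs": 0, "known_guide": 1, "blog_post": 2, "training_data": 3}


def synthesize(findings: list[dict]) -> list[dict]:
    """Deduplicate overlapping recommendations, then rank by source authority."""
    seen, merged = set(), []
    for f in findings:
        key = f["recommendation"].strip().lower()
        if key in seen:
            continue  # overlapping recommendation already captured
        seen.add(key)
        merged.append(f)
    merged.sort(key=lambda f: AUTHORITY.get(f["source_type"], 3))
    return merged
```

Conflict flagging and staleness filtering would layer on top of this (compare surviving entries pairwise; drop entries whose publication date predates the framework's last major release), but the core merge is just dedup-then-rank.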
4. Present Findings
Deliver to the user in this structure:
- Recommended Approach: the primary recommendation with rationale
- Key Patterns: concrete code/config examples the user can apply immediately
- Pitfalls to Avoid: common mistakes with explanations
- Sources: what was consulted, so the user can dig deeper
Gotchas
- 2-4 focused subagents, not more. Each carries ~20K tokens of startup overhead. Fewer focused queries beat many shallow ones.
- User-provided URLs are additive. If the user provided specific URLs, fetch those too, but they supplement research, not replace it.
- Context7 quota limits exist. If find-docs fails with quota errors, fall back to WebSearch only and note the limitation.
- If both find-docs and WebSearch fail, say so explicitly rather than falling back to training data.