# Start Task (SAM Task Execution Helper)

> This skill uses Claude hooks, which can execute code automatically in response to events. Review carefully before installing.
You are implementing a specific task from a SAM task file.
<task_input> $ARGUMENTS </task_input>
## Parse Arguments

- `task_file_path` (required): path to a `plan/tasks-*.md` file
- `--task <id>` (optional): Task ID to start (defaults to the first ready task)
- `--complete <id>` (optional): Task ID to mark COMPLETE
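The argument grammar above can be sketched with a minimal parser. This is illustrative only: the agent parses `$ARGUMENTS` itself, and `parse_task_args` is a hypothetical helper, not part of the skill.

```python
import argparse

def parse_task_args(raw: str) -> argparse.Namespace:
    """Illustrative parser for the $ARGUMENTS string described above."""
    parser = argparse.ArgumentParser(prog="start-task")
    parser.add_argument("task_file_path", help="path to a plan/tasks-*.md file")
    parser.add_argument("--task", help="task ID to start (defaults to first ready task)")
    parser.add_argument("--complete", help="task ID to mark COMPLETE")
    return parser.parse_args(raw.split())

args = parse_task_args("plan/tasks-login.md --task T2")
```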
## If `--complete <task-id>` Provided

- Run `mcp__plugin_dh_sam__sam_state(plan="P{N}", task="T{M}", status="complete")` to mark the task complete.
- Output: `Task {ID} marked as complete`
## Starting a Task

1. Read the task assignment via the SAM MCP tool:

   `mcp__plugin_dh_sam__sam_read(plan="P{N}", task="T{M}")`

   The response is a `TaskAssignment` JSON object containing:
   - `plan.goal` — the overall feature goal
   - `plan.context` — plan-level context manifest (architecture decisions, codebase notes)
   - `task` — full task details: title, requirements, constraints, acceptance criteria, verification steps
   - `task.skills` — skill names to load before implementing

   Use the address form `P{N}/T{M}`, where `N` is the plan number and `M` is the task number from the `--task` argument.
1a. Discover plan artifacts via manifest (when the issue number is known):

   If the `TaskAssignment` JSON contains a `parent_issue_number`, or the plan has an `issue` field, query the artifact manifest to discover available plan artifacts:

   `mcp__plugin_dh_backlog__artifact_list(issue_number=N)`

   If the response contains artifacts (a non-empty `artifacts` list), use `artifact_read` to fetch the architect spec and feature-context content:

   `mcp__plugin_dh_backlog__artifact_read(issue_number=N, artifact_type="architect")`
   `mcp__plugin_dh_backlog__artifact_read(issue_number=N, artifact_type="feature-context")`

   Use the returned content as implementation context instead of reading filesystem paths directly. This is especially important for worktree-isolated agents that cannot access uncommitted plan files from the root worktree.

   Fallback: if `artifact_list` returns an empty manifest (no `artifacts` entries) or an error, fall back to the filesystem path conventions (`dh_paths.plan_dir() / "architect-{slug}.md"` and `dh_paths.plan_dir() / "feature-context-{slug}.md"`). This preserves backward compatibility with issues that predate the artifact manifest system.
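The manifest-first lookup with filesystem fallback can be sketched as follows. This is a hedged sketch: `list_artifacts` stands in for the `artifact_list` MCP call, and `resolve_plan_context` is an illustrative helper, not a real API.

```python
def resolve_plan_context(list_artifacts, issue_number, fallback_paths):
    """Prefer the artifact manifest; fall back to filesystem conventions."""
    try:
        manifest = list_artifacts(issue_number=issue_number)
    except Exception:
        manifest = {}
    if manifest.get("artifacts"):
        # Non-empty manifest: fetch content via artifact_read, not paths.
        return "manifest", [a["artifact_type"] for a in manifest["artifacts"]]
    # Empty manifest or error: use the pre-manifest filesystem conventions.
    return "filesystem", fallback_paths
```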
2. Select the task:
   - If `--task` is provided, use that ID.
   - Else pick the first task whose status is `not-started` and whose dependencies are all resolved (check `task.dependencies` in the `TaskAssignment`).
   - If no task is ready, report that and stop.
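The default selection rule can be sketched as follows, assuming each task dict carries `id`, `status`, and `dependencies` as in the `TaskAssignment`; `first_ready_task` is an illustrative helper, not a SAM API.

```python
def first_ready_task(tasks):
    """Return the first not-started task whose dependencies are all complete."""
    done = {t["id"] for t in tasks if t["status"] == "complete"}
    for t in tasks:
        deps = t.get("dependencies", [])
        if t["status"] == "not-started" and all(d in done for d in deps):
            return t["id"]
    return None
```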
2a. Load task-level skills (if present):
   - Read `task.skills` from the `TaskAssignment` JSON (an array of skill names).
   - If it is absent or empty, skip this step (backward compatible with older task files).
   - For each skill name, invoke: `Skill(skill="{skill-name}")`
   - If a skill fails to load, log a warning and continue. Do not abort task execution.
   - Redundancy note: the orchestrator (`/implement-feature`) may also include skill-loading instructions in the delegation prompt. This is intentional redundancy — loading a skill twice is a no-op.
   - Task-level skills are additive to any skills already declared in the agent definition's frontmatter.
3. Claim the task (prevents duplicate dispatch):

   Use the `sam_claim` MCP tool. This is the ONLY permitted way to mark a task in-progress. Do NOT edit the `status` or `started` fields directly with the Edit tool.

   `mcp__plugin_dh_sam__sam_claim(plan="P{N}", task="T{M}")`

   If the response contains `"claimed": false`:
   - The task was already claimed by another agent, is already complete, or could not be found.
   - Output the full JSON result for the orchestrator.
   - STOP. Do not proceed with implementation. Do not write the context file.
   - The orchestrator's hook will detect the stop, and the task remains in its current state.

   If the response contains `"claimed": true`:
   - The task is claimed; `status: in-progress` and `started:` are written on disk.
   - Proceed to step 4 (write the context file) and then continue with implementation.
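The two claim-response branches can be sketched as follows. This is illustrative: `response` is the parsed JSON returned by `sam_claim`, and `handle_claim_response` is a hypothetical helper.

```python
import json

def handle_claim_response(response: dict) -> str:
    """Decide the next action from a sam_claim response."""
    if response.get("claimed") is True:
        # status: in-progress and started: are now on disk; keep going.
        return "proceed"
    # Already claimed, complete, or not found: surface the JSON and stop.
    print(json.dumps(response))
    return "stop"
```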
4. Write the active-task context file (required for hook-driven updates). The context directory is resolved via `dh_paths.context_dir(session_id)`:

   ```python
   # Python (preferred; uses dh_paths). ${CLAUDE_SESSION_ID}, {task_file_path},
   # {task_id}, and N are placeholders substituted before running.
   import json
   from dh_paths import context_dir

   ctx = context_dir(session_id="${CLAUDE_SESSION_ID}")
   ctx.mkdir(parents=True, exist_ok=True)
   (ctx / "active-task-${CLAUDE_SESSION_ID}.json").write_text(
       json.dumps({"task_file_path": "{task_file_path}", "task_id": "{task_id}", "parent_issue_number": N})
   )
   ```

   ```shell
   # Shell fallback (when inline Python is not available). The session ID is
   # passed via argv so the single-quoted Python source needs no shell expansion.
   ctx_dir="$(python3 -c 'import sys; from dh_paths import context_dir; print(context_dir(sys.argv[1]))' "${CLAUDE_SESSION_ID}")"
   mkdir -p "$ctx_dir"
   printf '%s' '{"task_file_path": "{task_file_path}", "task_id": "{task_id}", "parent_issue_number": N}' \
     > "$ctx_dir/active-task-${CLAUDE_SESSION_ID}.json"
   ```

   Omit `parent_issue_number` if the story issue number is not known. The hook treats absence as `None` and skips GitHub sync.

   If `parent_issue_number` is known and the `github_issue` field is set in the task YAML, call `backlog_core.github.update_task_status(repo, github_issue, "in-progress")` after the claim step to sync the in-progress status to GitHub. Failure is non-fatal; continue regardless.
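The non-fatal sync can be sketched with an injected callable, so the pattern is testable; in practice the callable is `backlog_core.github.update_task_status`, and `sync_in_progress` is an illustrative wrapper.

```python
def sync_in_progress(update_task_status, repo, github_issue):
    """Best-effort GitHub status sync; failure must never abort the task."""
    try:
        update_task_status(repo, github_issue, "in-progress")
        return True
    except Exception as exc:
        print(f"warning: GitHub status sync failed: {exc}")
        return False
```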
5. Record divergence observations during implementation.

   While implementing, if you discover that the architect spec or feature-context describes something that does not match what you are implementing, append a divergence note to the task file under a `## Divergence Notes` section.

   When to record: record a divergence note when ALL of these hold:
   - You are implementing something that differs from what the architect spec or feature-context describes.
   - The difference is not a trivial implementation detail (e.g., a different variable name or import path).
   - The difference affects the observable behavior, structure, or scope of the feature.

   Divergence note format:

   ```markdown
   ## Divergence Notes

   ### DN-1: {Brief title}
   - **Plan artifact**: ~/.dh/projects/{project-slug}/plan/architect-{slug}.md, section "{section name}"
   - **Plan claim**: "{quoted text from plan artifact}"
   - **Actual implementation**: "{what was actually done and why}"
   - **Classification**: design-refinement | intent-divergence
   - **Recorded**: {ISO timestamp}
   ```

   After appending a note, update `divergence-notes: {count}` in the YAML frontmatter (or add `**Divergence Notes**: {count}` in the legacy format).

   For full artifact classification rules and divergence thresholds, see `plan-artifact-lifecycle.md`.
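Bumping the frontmatter counter can be sketched as follows. This is a sketch that assumes the field is a bare `divergence-notes: N` line in the YAML frontmatter; `bump_divergence_count` is a hypothetical helper.

```python
import re

def bump_divergence_count(frontmatter: str) -> str:
    """Increment divergence-notes in YAML frontmatter, adding it if missing."""
    match = re.search(r"^divergence-notes:\s*(\d+)\s*$", frontmatter, re.MULTILINE)
    if match:
        bumped = f"divergence-notes: {int(match.group(1)) + 1}"
        return frontmatter[:match.start()] + bumped + frontmatter[match.end():]
    # Field absent: append it with an initial count of 1.
    return frontmatter.rstrip("\n") + "\ndivergence-notes: 1\n"
```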
6. Commit message restriction: `Fixes #N` trailers are PROHIBITED in task-level commits.

   Task-level commits must NEVER include `Fixes #N`, `Closes #N`, or `Resolves #N` trailers. These trailers trigger automatic GitHub issue closure. Issue closure is handled exclusively by `/complete-implementation` in its final commit step, after all quality gates pass. Including these trailers in task commits causes premature issue closure before verification is complete.
7. Implement against the task's acceptance criteria and run its verification steps.