# Claude Prompt Engineering
Knowledge snapshot from: 2026-02-20
Generated by: cogworks
## Overview
This skill provides practical, Claude-specific prompt engineering guidance for Opus 4.6, Sonnet 4.5, and Haiku 4.5. It emphasizes fast model-aware decisions: reasoning mode selection, context management, safe autonomy, tool efficiency, and output control.
## When to Use This Skill
Use this skill when you need to:
- Design or review Claude system prompts
- Tune adaptive or extended thinking behavior
- Improve tool orchestration and parallelization
- Handle long-horizon or multi-window workflows
- Add prompt-injection and data-leakage safeguards
- Reduce verbosity while preserving answer quality
## Quick Decision Cheatsheet
- Opus 4.6: Use adaptive thinking (`low|medium|high|max`) only when task complexity justifies it.
- Sonnet 4.5: Use extended thinking for legacy workflows; minimum budget 1024 tokens.
- Simple tasks: Prefer no explicit thinking config.
- Long tasks: Persist state in files + git checkpoints.
- Independent reads/searches: Run tool calls in parallel.
- Risky/irreversible actions: Ask for confirmation first.
- Production exposure: Apply defense-in-depth (input, architecture, output).
## Supporting Docs

- reference.md: Canonical guidance, decision rules, anti-patterns, and sources
- patterns.md: Reusable patterns and templates
- examples.md: Compact before/after prompt examples
## Model Routing Contract
- primary-capability-class: reasoning
- fallback-capability-class: workhorse
- task-to-capability mapping:
  - source ingestion/extraction: workhorse
  - synthesis/contradiction resolution: reasoning
  - final skill drafting: workhorse
- quality gates tied to capability:
  - reasoning: resolve contradictions and justify the interpretation
  - workhorse: complete structure with citations and no stubs
- runtime model resolution:
  - map capability classes to provider/runtime defaults automatically
  - never ask the user to choose a model
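The routing contract above amounts to two lookup tables and a fallback rule. A minimal sketch, assuming hypothetical task keys and illustrative model IDs (the real runtime defaults are resolved by the provider, never by the user):

```python
# Tasks map to capability classes per the routing contract.
TASK_TO_CAPABILITY = {
    "source-ingestion": "workhorse",
    "synthesis": "reasoning",
    "final-drafting": "workhorse",
}

# Capability classes map to runtime defaults; model IDs here are
# illustrative placeholders, not part of the contract.
RUNTIME_DEFAULTS = {
    "reasoning": "claude-opus-4-6",
    "workhorse": "claude-sonnet-4-5",
}

def resolve_model(task: str) -> str:
    """Resolve a task to a model via its capability class.

    Unknown tasks fall back to the workhorse capability class.
    """
    capability = TASK_TO_CAPABILITY.get(task, "workhorse")
    return RUNTIME_DEFAULTS[capability]
```

Keeping tasks decoupled from model IDs means the contract survives model upgrades: only `RUNTIME_DEFAULTS` changes.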
## Invocation

`/claude-prompt-engineering`