# Prompt Engineer
## Purpose
Provides expertise in designing, optimizing, and evaluating prompts for Large Language Models. Specializes in prompting techniques like Chain-of-Thought, ReAct, and few-shot learning, as well as production prompt management and evaluation.
## When to Use
- Designing prompts for LLM applications
- Optimizing prompt performance
- Implementing Chain-of-Thought reasoning
- Creating few-shot examples
- Building prompt templates
- Evaluating prompt effectiveness
- Managing prompts in production
- Reducing hallucinations through prompting
## Quick Start
Invoke this skill when:
- Crafting prompts for LLM applications
- Optimizing existing prompts
- Implementing advanced prompting techniques
- Building prompt management systems
- Evaluating prompt quality
Do NOT invoke when:
- LLM system architecture → use /llm-architect
- RAG implementation → use /ai-engineer
- NLP model training → use /nlp-engineer
- Agent performance monitoring → use /performance-monitor
## Decision Framework
Prompting Technique?
├── Reasoning Tasks
│ ├── Step-by-step → Chain-of-Thought
│ └── Tool use → ReAct
├── Classification/Extraction
│ ├── Clear categories → Zero-shot + examples
│ └── Complex → Few-shot with edge cases
├── Generation
│ └── Structured output → JSON mode + schema
└── Consistency
└── System prompt + temperature tuning
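The decision tree above can be sketched as a simple lookup table. The task/subtype keys and technique labels mirror the tree; the names are illustrative, not an API.

```python
# Decision tree from above as a (task, subtype) -> technique lookup.
# Keys are illustrative slugs, not a standard vocabulary.
TECHNIQUE = {
    ("reasoning", "step-by-step"): "Chain-of-Thought",
    ("reasoning", "tool-use"): "ReAct",
    ("classification", "clear-categories"): "zero-shot + examples",
    ("classification", "complex"): "few-shot with edge cases",
    ("generation", "structured-output"): "JSON mode + schema",
    ("consistency", None): "system prompt + temperature tuning",
}

def choose_technique(task, subtype=None):
    """Return the prompting technique the tree recommends for this task."""
    return TECHNIQUE[(task, subtype)]
```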
## Core Workflows
### 1. Prompt Design
- Define task clearly
- Choose prompting technique
- Write system prompt with context
- Add examples if few-shot
- Specify output format
- Test with diverse inputs
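The design steps above can be sketched as a chat-style message builder: system prompt with context, few-shot examples, explicit output format. The ticket-classification task, categories, and example pairs are hypothetical placeholders.

```python
# Hypothetical task: support-ticket classification with JSON output.
SYSTEM_PROMPT = (
    "You are a support-ticket classifier.\n"
    "Classify each ticket into exactly one category: billing, technical, account.\n"
    'Respond only with JSON: {"category": "<category>", "confidence": "<high|medium|low>"}'
)

# Few-shot pairs demonstrating the expected input/output shape.
FEW_SHOT = [
    ("I was charged twice this month.",
     '{"category": "billing", "confidence": "high"}'),
    ("The app crashes when I upload a file.",
     '{"category": "technical", "confidence": "high"}'),
]

def build_messages(ticket):
    """Assemble system prompt, few-shot pairs, then the real input."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    for user_text, assistant_text in FEW_SHOT:
        messages.append({"role": "user", "content": user_text})
        messages.append({"role": "assistant", "content": assistant_text})
    messages.append({"role": "user", "content": ticket})
    return messages
```

The message list plugs into any chat-completion-style API; testing with diverse inputs then means varying the final user message, not rewriting the prompt.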
### 2. Chain-of-Thought Implementation
- Identify reasoning requirements
- Add "Let's think step by step" or equivalent
- Provide reasoning examples
- Structure expected reasoning steps
- Test reasoning quality
- Iterate on step guidance
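The steps above can be sketched as a one-shot Chain-of-Thought prompt: a worked reasoning example that structures the expected steps, followed by the step-by-step cue. The arithmetic example is illustrative only.

```python
# One worked example showing the reasoning structure we want the model to copy.
COT_EXAMPLE = """Q: A shop sells pens at 3 for $2. How much do 12 pens cost?
A: Let's think step by step.
1. 12 pens is 12 / 3 = 4 groups of 3 pens.
2. Each group costs $2, so the total is 4 * 2 = $8.
The answer is $8."""

def cot_prompt(question):
    """Prepend the worked example, then cue the model to reason step by step."""
    return f"{COT_EXAMPLE}\n\nQ: {question}\nA: Let's think step by step."
```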
### 3. Prompt Optimization
- Establish baseline metrics
- Identify failure patterns
- Adjust instructions for clarity
- Add/modify examples
- Tune output constraints
- Measure improvement
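A toy harness for the "establish baseline metrics" and "measure improvement" steps might look like this; `stub_model` stands in for a real LLM call, and the dataset is fabricated for illustration.

```python
def exact_match_accuracy(run_prompt, dataset):
    """Fraction of (input, expected) pairs the model answers exactly."""
    hits = sum(1 for text, expected in dataset
               if run_prompt(text).strip() == expected)
    return hits / len(dataset)

def stub_model(text):
    # Placeholder for an API call with the prompt under test.
    return "billing" if "charge" in text else "technical"

# Fabricated evaluation set; in practice, sample real inputs and label them.
DATASET = [
    ("I was charged twice.", "billing"),
    ("The app crashes on login.", "technical"),
    ("Refund my last charge.", "billing"),
]
```

Run the metric before and after each prompt change, and record the scores alongside the prompt version so improvements are attributable.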
## Best Practices
- Be specific and explicit in instructions
- Use structured output formats (JSON, XML)
- Include examples for complex tasks
- Test with edge cases and adversarial inputs
- Version control prompts
- Measure and track prompt performance
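One way to apply "use structured output formats" is to state the allowed keys and values in the prompt, then validate whatever comes back before using it. The schema and field names below are illustrative assumptions.

```python
import json

# Hypothetical output contract: two required keys with closed vocabularies.
OUTPUT_SCHEMA = {
    "required": ["category", "confidence"],
    "allowed": {
        "category": ["billing", "technical", "account"],
        "confidence": ["high", "medium", "low"],
    },
}

def validate_output(raw):
    """Parse model output and reject missing keys or out-of-vocabulary values."""
    data = json.loads(raw)
    for key in OUTPUT_SCHEMA["required"]:
        if key not in data:
            raise ValueError(f"missing key: {key}")
        if data[key] not in OUTPUT_SCHEMA["allowed"][key]:
            raise ValueError(f"invalid value for {key}: {data[key]}")
    return data
```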
## Anti-Patterns
| Anti-Pattern | Problem | Correct Approach |
|---|---|---|
| Vague instructions | Inconsistent output | Be specific and explicit |
| No examples | Poor performance on complex tasks | Add few-shot examples |
| Unstructured output | Hard to parse | Specify format clearly |
| No testing | Unknown failure modes | Test diverse inputs |
| Prompt in code | Hard to iterate | Separate prompt management |
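The "prompt in code" fix from the last row can be sketched as a versioned registry: templates live outside application logic (here a dict; in production, files or a database) and code looks them up by name and version. The names are hypothetical.

```python
# Versioned prompt store keyed by (name, version).
PROMPT_REGISTRY = {
    ("ticket-classifier", "v1"): "Classify this ticket:\n{ticket}",
    ("ticket-classifier", "v2"): (
        "Classify the ticket into billing, technical, or account:\n{ticket}"
    ),
}

def get_prompt(name, version, **variables):
    """Fetch a template by (name, version) and fill in its variables."""
    template = PROMPT_REGISTRY[(name, version)]
    return template.format(**variables)
```

Keeping old versions in the registry makes A/B comparison and rollback a one-line change instead of a code edit.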