Prompt Engineering Patterns

Advanced prompt engineering techniques to maximize LLM performance, reliability, and controllability.

Core Capabilities

1. Few-Shot Learning

Teach the model by showing examples instead of explaining rules. Include 2-5 input-output pairs that demonstrate the desired behavior. Use when you need consistent formatting, specific reasoning patterns, or handling of edge cases. More examples generally improve accuracy but consume tokens; balance the example count against task complexity.

Example:

Extract key information from support tickets:

Input: "My login doesn't work and I keep getting error 403"
Output: {"issue": "authentication", "error_code": "403", "priority": "high"}

Input: "Feature request: add dark mode to settings"
Output: {"issue": "feature_request", "error_code": null, "priority": "low"}

Now process: "Can't upload files larger than 10MB, getting timeout"
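
The same prompt can be assembled programmatically as the example set grows. A minimal Python sketch (the EXAMPLES list and build_few_shot_prompt helper are illustrative names, not part of any library):

import json

EXAMPLES = [
    ("My login doesn't work and I keep getting error 403",
     {"issue": "authentication", "error_code": "403", "priority": "high"}),
    ("Feature request: add dark mode to settings",
     {"issue": "feature_request", "error_code": None, "priority": "low"}),
]

def build_few_shot_prompt(ticket):
    # Render each example as an input-output pair, then append the new input.
    lines = ["Extract key information from support tickets:", ""]
    for text, output in EXAMPLES:
        lines.append(f'Input: "{text}"')
        lines.append(f"Output: {json.dumps(output)}")  # None serializes as null
        lines.append("")
    lines.append(f'Now process: "{ticket}"')
    return "\n".join(lines)

print(build_few_shot_prompt("Can't upload files larger than 10MB, getting timeout"))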

2. Chain-of-Thought Prompting

Request step-by-step reasoning before the final answer. Add "Let's think step by step" (zero-shot) or include example reasoning traces (few-shot). Use for complex problems requiring multi-step logic, mathematical reasoning, or when you need to verify the model's thought process. Reported gains on analytical benchmarks are often in the 30-50% range, though the effect varies by model and task.

Example:

Analyze this bug report and determine root cause.

Think step by step:

1. What is the expected behavior?
2. What is the actual behavior?
3. What changed recently that could cause this?
4. What components are involved?
5. What is the most likely root cause?

Bug: "Users can't save drafts after the cache update deployed yesterday"

3. Prompt Optimization

Systematically improve prompts through testing and refinement. Start simple, measure performance (accuracy, consistency, token usage), then iterate. Test on diverse inputs including edge cases. Use A/B testing to compare variations. Critical for production prompts where consistency and cost matter.

Example:

Version 1 (Simple): "Summarize this article"
→ Result: Inconsistent length, misses key points

Version 2 (Add constraints): "Summarize in 3 bullet points"
→ Result: Better structure, but still misses nuance

Version 3 (Add reasoning): "Identify the 3 main findings, then summarize each"
→ Result: Consistent, accurate, captures key information
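
A lightweight harness can compare such variants on a shared test set. A sketch, assuming you supply call_model (your own client wrapper) and a task-appropriate score function:

VARIANTS = {
    "v1_simple": "Summarize this article",
    "v2_constrained": "Summarize in 3 bullet points",
    "v3_reasoned": "Identify the 3 main findings, then summarize each",
}

def compare_variants(call_model, score, articles):
    # call_model(instruction, article) -> str and score(summary) -> float
    # are placeholders for your own client and evaluation metric.
    results = {}
    for name, instruction in VARIANTS.items():
        scores = [score(call_model(instruction, article)) for article in articles]
        results[name] = sum(scores) / len(scores)
    return results  # e.g. {"v1_simple": 0.4, "v2_constrained": 0.7, ...}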

4. Template Systems

Build reusable prompt structures with variables, conditional sections, and modular components. Use for multi-turn conversations, role-based interactions, or when the same pattern applies to different inputs. Reduces duplication and ensures consistency across similar tasks.

Example:

# Reusable code review template
template = """
Review this {language} code for {focus_area}.

Code:
{code_block}

Provide feedback on:
{checklist}
"""

# Usage (user_code holds the snippet under review)
user_code = 'cursor.execute("SELECT * FROM users WHERE id = " + user_id)'
prompt = template.format(
    language="Python",
    focus_area="security vulnerabilities",
    code_block=user_code,
    checklist="1. SQL injection\n2. XSS risks\n3. Authentication"
)
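
str.format covers simple variables; for the conditional sections mentioned above, plain Python works too. A sketch (the include_tests flag is a hypothetical example of a conditional section):

def build_review_prompt(language, focus_area, code_block, include_tests=False):
    # Assemble the prompt from modular sections.
    sections = [
        f"Review this {language} code for {focus_area}.",
        f"Code:\n{code_block}",
    ]
    if include_tests:
        # Conditional section: only included when the caller asks for it.
        sections.append("Also suggest unit tests covering the risky paths.")
    return "\n\n".join(sections)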

5. System Prompt Design

Set global behavior and constraints that persist across the conversation. Define the model's role, expertise level, output format, and safety guidelines. Use system prompts for stable instructions that shouldn't change turn-to-turn, freeing up user message tokens for variable content.

Example:

System: You are a senior backend engineer specializing in API design.

Rules:

- Always consider scalability and performance
- Suggest RESTful patterns by default
- Flag security concerns immediately
- Provide code examples in Python
- Use early return pattern

Format responses as:

1. Analysis
2. Recommendation
3. Code example
4. Trade-offs
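
In chat-style APIs, the system prompt is sent as a separate role alongside each request. A minimal sketch using the OpenAI Python SDK's chat completions call (the model name is an illustrative choice; adapt to your provider):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a senior backend engineer specializing in API design. "
    "Always consider scalability and performance, suggest RESTful patterns "
    "by default, and flag security concerns immediately."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},  # stable instructions
        {"role": "user", "content": "Design an endpoint for bulk user import."},
    ],
)
print(response.choices[0].message.content)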

Key Patterns

Progressive Disclosure

Start with simple prompts, add complexity only when needed:

  1. Level 1 (direct instruction): "Summarize this article"
  2. Level 2 (add constraints): "Summarize this article in 3 bullet points, focusing on key findings"
  3. Level 3 (add reasoning): "Read this article, identify the main findings, then summarize in 3 bullet points"
  4. Level 4 (add examples): include 2-3 example summaries as input-output pairs

Instruction Hierarchy

[System Context] → [Task Instruction] → [Examples] → [Input Data] → [Output Format]
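
One way to keep that ordering consistent in code, with all parameter names illustrative:

def assemble_prompt(system_context, task, examples, input_data, output_format):
    # Concatenate the sections in hierarchy order, skipping any left empty.
    parts = [system_context, task, *examples,
             f"Input:\n{input_data}", f"Output format:\n{output_format}"]
    return "\n\n".join(part for part in parts if part)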

Error Recovery

Build prompts that gracefully handle failures (an illustrative fallback clause follows the list):

  • Include fallback instructions
  • Request confidence scores
  • Ask for alternative interpretations when uncertain
  • Specify how to indicate missing information
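
For instance, an extraction prompt might end with a fallback clause like the following (wording is illustrative):

If any field cannot be determined, set it to null rather than guessing, and include a "confidence" value between 0 and 1. If the request is ambiguous, list the plausible interpretations instead of choosing one.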

Best Practices

  1. Be Specific: Vague prompts produce inconsistent results
  2. Show, Don't Tell: Examples are more effective than descriptions
  3. Test Extensively: Evaluate on diverse, representative inputs
  4. Iterate Rapidly: Small changes can have large impacts
  5. Monitor Performance: Track metrics in production
  6. Version Control: Treat prompts as code with proper versioning
  7. Document Intent: Explain why prompts are structured as they are

Common Pitfalls

  • Over-engineering: Starting with complex prompts before trying simple ones
  • Example pollution: Using examples that don't match the target task
  • Context overflow: Exceeding token limits with excessive examples
  • Ambiguous instructions: Leaving room for multiple interpretations
  • Ignoring edge cases: Not testing on unusual or boundary inputs
opencode7