prompt-factory
Prompt Factory - World-Class Prompt Powerhouse
Generate production-ready mega-prompts through intelligent questioning and comprehensive domain presets.
⚠️ CRITICAL CONSTRAINTS - READ FIRST
This skill generates PROMPTS only. It does NOT implement the work described in the prompt.
What This Skill DOES:
- ✅ Generate a comprehensive PROMPT (text document in chosen format)
- ✅ Ask 5-7 questions to understand requirements (MANDATORY - no skipping)
- ✅ Validate prompt quality before delivery
- ✅ Output a SINGLE prompt document with token count
- ✅ Provide the prompt ready to copy and use elsewhere
What This Skill DOES NOT Do:
- ❌ Implement the actual work (no code files, no diagrams, no APIs)
- ❌ Create architectural diagrams or technical implementations
- ❌ Write actual marketing campaigns or business strategies
- ❌ Build infrastructure or deploy anything
- ❌ Execute the prompt after generating it
Expected Workflow:
- User asks for help creating a prompt
- Skill MUST ask 5-7 questions (even if context seems obvious)
- User answers questions with specific details
- Skill generates ONE comprehensive prompt document
- Skill announces token count
- STOP - Do not implement anything from the prompt
- Ask: "Would you like me to modify the prompt or create a variation?"
Gotchas & Common Pitfalls
1. Skipping Questions
Danger: Agent rushes to prompt generation without asking questions
- Why it breaks: Vague prompts miss key context, leading to poor outputs when user applies them
- Fix: ALWAYS ask 5-7 questions minimum, even for "obvious" requests
- Example: User says "PM prompt for PRD" but doesn't specify domain, constraints, or team context
- Wrong: Jump to generate PM preset
- Right: Ask: domain, PRD type, team size, constraints, success criteria
2. Implementing Instead of Generating
Danger: Agent starts coding/building the thing the prompt describes
- Why it breaks: Scope creep; this skill only makes prompts, not implementations
- Fix: Generate the prompt → Stop → Offer variations only
- Example: Prompt describes "REST API for payments"
- Wrong: Start coding the API
- Right: Generate prompt → User takes prompt elsewhere to implement
3. Conflating with PROMPTS_FACTORY_PROMPT.md
Danger: User asks "make a prompt system for FinTech" and you generate one mega-prompt
- Why it breaks: That's a meta-prompt task (building a prompt builder), not a single role prompt
- Fix: Redirect to PROMPTS_FACTORY_PROMPT.md for domain-wide builders
- Difference:
- This skill: One prompt for one role (e.g., "FinTech PM") → ~5K tokens
- Meta-prompt: Entire FinTech system with 10-20 role presets → generate with PROMPTS_FACTORY_PROMPT.md
4. Token Count Inflation
Danger: Generated prompts exceed optimal ranges (core: 3-6K, advanced: 8-12K)
- Why it breaks: Oversized prompts are harder to use; undersized prompts lack specificity
- Fix: Apply optimization tips from references/prompt-patterns.md
- Monitor: Announce token count; flag if core >8K or advanced >15K
5. Generating Multiple Formats by Default
Danger: Generating all 4 formats when user asked for one
- Why it breaks: Token bloat; user only needs one format
- Fix: Ask for format preference first; only generate all formats if user requests
- Default: XML (optimal for LLM parsing)
6. Weak Question Flow
Danger: Asking 7 generic questions instead of 5-7 contextually relevant ones
- Why it breaks: Questions don't validate assumptions; responses lack specificity
- Fix: Use smart adaptation in references/advanced-workflow.md → skip truly redundant questions, emphasize domain/constraints
- Example: If user specifies "React 18 + TypeScript," skip the tech stack question
- Still ask: domain, task, constraints, success criteria (always validate)
Quick Start: Choose Your Path
Path 1: Quick-Start Preset (Fastest)
Use when: You need a prompt for a common role
- User says: "I need a prompt for [preset name]"
- Confirm the preset matches their need
- Customize any variables (optional)
- Generate → Deliver with token count
- Ask: "Want a variation or different format?"
Available Presets: 69 across 15 domains (see references/presets.md)
Path 2: Custom Prompt (5-7 Questions - MANDATORY)
Use when: Building a unique prompt from scratch
- Detect intent from user request
- MUST ask 5-7 questions with example answers (mandatory - even for "obvious" requests)
- Apply contextual best practices
- Validate quality → Deliver with token count
- Ask: "Want to modify the prompt or try a different format?"
Full 8-step workflow: See references/advanced-workflow.md
Expected Output Formats
Choose one or generate all:
- XML (default) — Optimal for LLM parsing with structured tags
- Claude — Claude system prompt format
- ChatGPT — Custom instructions format
- Gemini — Google Gemini format
- All — Generate all 4 formats (use sparingly - token heavy)
See references/examples.md for complete examples of each format.
Generation Modes
- Core (default) — Prompt + instructions + 2-3 examples (~4-6K tokens)
- Advanced — Core + testing scenarios + variations + optimization tips (~10K tokens)
Quality Validation (7-Point Gates)
Before delivery, validate:
- ✓ XML Structure valid (if XML format)
- ✓ Completeness — All questions incorporated
- ✓ Token Count optimal (core: 3-6K, advanced: 8-12K)
- ✓ No Placeholders — All [...] filled
- ✓ Actionable Workflow — Clear, executable steps
- ✓ Best Practices applied contextually
- ✓ Examples present (minimum 2)
If validation fails: Fix before delivery.
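A rough sketch of how the mechanical gates could be checked. The ~4-characters-per-token estimate, the placeholder pattern, and the function names are assumptions rather than part of the skill, and gates 2, 5, and 6 still need judgment:

```python
import re
import xml.etree.ElementTree as ET

def estimate_tokens(text: str) -> int:
    """Crude heuristic: ~4 characters per token for English prose."""
    return len(text) // 4

def validate_prompt(doc: str, mode: str = "core", fmt: str = "xml") -> list[str]:
    """Return the list of failed gates; an empty list means ready to deliver."""
    failures = []
    if fmt == "xml":
        try:
            ET.fromstring(f"<prompt>{doc}</prompt>")    # gate 1: XML structure
        except ET.ParseError:
            failures.append("invalid XML structure")
    low, high = (3_000, 6_000) if mode == "core" else (8_000, 12_000)
    if not low <= estimate_tokens(doc) <= high:         # gate 3: token count
        failures.append("token count outside optimal range")
    if re.search(r"\[\.\.\.\]", doc):                   # gate 4: placeholders
        failures.append("unfilled [...] placeholders remain")
    if doc.count("<example>") < 2:                      # gate 7: >= 2 examples
        failures.append("fewer than 2 examples")
    # Gates 2, 5, 6 (completeness, actionable workflow, best practices)
    # require LLM or human review rather than a mechanical check.
    return failures
```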
Token Count Announcement
After generating, announce:
- "Token Count: ~4,200 tokens (Core mode - within optimal range ✅)"
- "Token Count: ~10,500 tokens (Advanced mode - comprehensive ✅)"
- "Token Count: ~7,800 tokens (Warning: Higher than typical Core mode)"
Reference Files
Read on-demand based on context:
| File | When to Use | Contains |
|---|---|---|
| references/presets.md | User asks for quick-start | 69 preset names, domains, use cases |
| references/advanced-workflow.md | Custom prompt path (Path 2) | 8-step workflow, detailed questions, validation |
| references/prompt-patterns.md | During generation | Best practices (OpenAI/Anthropic/Google), pattern library, template matching |
| references/examples.md | Format clarification | 2-3 worked examples per format (XML, Claude, ChatGPT, Gemini) |
Modern Prompting Techniques (2025/2026)
When generating prompts, incorporate these proven techniques as appropriate:
Chain-of-Thought (CoT)
For reasoning tasks, instruct the model to think step-by-step:
```xml
<instructions>
Think through this step-by-step before giving your final answer.
Show your reasoning process explicitly.
</instructions>
```
Structured Outputs
For prompts that need machine-parseable responses:
```xml
<output_format>
Respond ONLY with valid JSON matching this schema:
{"result": string, "confidence": number, "reasoning": string}
</output_format>
```
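One payoff of a pinned schema is that the consumer can validate replies mechanically. A minimal sketch (field names taken from the schema above; everything else is assumed):

```python
import json

def parse_response(raw: str) -> dict:
    """Parse and type-check a model reply against the schema above."""
    data = json.loads(raw)  # raises json.JSONDecodeError on non-JSON replies
    expected = {"result": str, "confidence": (int, float), "reasoning": str}
    for field, types in expected.items():
        if not isinstance(data.get(field), types):
            raise ValueError(f"field {field!r} is missing or has the wrong type")
    return data

# Example: parse_response('{"result": "ok", "confidence": 0.9, "reasoning": "..."}')
```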
Few-Shot Examples
Always include 2-3 high-quality examples for consistent behavior:
```xml
<examples>
<example>
<input>Schedule a meeting for next Tuesday</input>
<output>{"action": "create_event", "day": "next_tuesday", "title": "Meeting"}</output>
</example>
</examples>
```
Role + Context + Task Structure (optimal for Claude)
```xml
<role>You are a senior {domain} specialist with 15 years of experience at top-tier firms.</role>
<context>You are helping {user_type} with {specific_situation}.</context>
<task>{specific_instructions}</task>
<constraints>{limitations_and_guardrails}</constraints>
<output_format>{format_specification}</output_format>
```
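The braces are substitution slots, not literal text. A small sketch of filling them, with invented values, which is exactly the kind of leftover the No Placeholders gate exists to catch:

```python
TEMPLATE = (
    "<role>You are a senior {domain} specialist with 15 years of "
    "experience at top-tier firms.</role>\n"
    "<task>{specific_instructions}</task>"
)

# Invented example values; every slot must be filled before delivery.
prompt = TEMPLATE.format(
    domain="FinTech",
    specific_instructions="Draft the PRD for a payments-reconciliation feature.",
)
```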
Negative Instructions
Explicitly stating what NOT to do reduces unwanted behaviors:
```xml
<constraints>
- Do NOT include unsolicited advice beyond the specific question
- Do NOT use jargon without explanation
- Do NOT recommend third-party tools unless asked
</constraints>
```
Model-Specific Formatting Notes
- Claude: Responds best to XML tags; prefers explicit thinking steps; use <thinking> for CoT
- GPT-4.1: Responds well to markdown; system prompt is highly influential
- Gemini 2.5: Responds well to numbered lists; excels with explicit JSON schema specs
Common Questions
"Do you implement the prompt after generating it?" → No. This skill makes prompts only. Once generated, you take it elsewhere to implement or use with another LLM.
"Can I get a variation of this prompt?" → Yes. Ask for "concise version," "more detailed," "different tone," or "different format." We'll modify the prompt (not reimplement it).
"What if the generated prompt doesn't work?" → We can refine it. Share what didn't work, and we'll adjust specific sections, not rebuild from scratch.
"Why so many questions?" → Good prompts need context. 5-7 questions ensure we capture domain specifics, constraints, and success criteria—making the final prompt much more effective.
Ready to generate a world-class prompt? Let's start with a preset name or custom request.