gpt-prompting
GPT Prompting (GPT-5.2)
Use this skill to turn vague “be helpful” prompting into predictable, evaluable behavior.
If you need the source guides, see:
- https://developers.openai.com/cookbook/examples/gpt-5/gpt-5-2_prompting_guide
- https://developers.openai.com/cookbook/examples/gpt-5/gpt-5_prompting_guide (agentic eagerness + tool preambles)
- https://developers.openai.com/api/docs/guides/structured-outputs (schema enforcement)
For the block library + examples, read: references/guide.md
Quick start (recommended flow)
- State the job + constraints (what success is, what not to do).
- Add a verbosity/output-shape clamp.
- Add risk rails (ambiguity + hallucination guard).
- If tools exist: add tool usage rules and a post-write change recap.
- If extracting data: add an extraction schema with null-for-missing.
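The null-for-missing convention from the last step can be sketched as a Structured Outputs JSON Schema. This is a minimal sketch, not the skill's own schema: the field names (invoice_id, total_usd, due_date) are hypothetical examples.

```python
# Sketch of an extraction schema with null-for-missing, following OpenAI
# Structured Outputs conventions (strict JSON Schema). Field names here
# are hypothetical examples, not part of the skill.
extraction_schema = {
    "name": "invoice_extraction",
    "strict": True,
    "schema": {
        "type": "object",
        "properties": {
            # Union with "null" so the model returns null instead of
            # guessing when a field is absent from the source text.
            "invoice_id": {"type": ["string", "null"]},
            "total_usd": {"type": ["number", "null"]},
            "due_date": {"type": ["string", "null"]},
        },
        # Strict mode requires every property to be listed as required;
        # "missing" is expressed via the null type, not by omission.
        "required": ["invoice_id", "total_usd", "due_date"],
        "additionalProperties": False,
    },
}
```

The point is that "missing" is modeled in the schema itself, so the hallucination guard is enforced at the output layer rather than only in prose instructions.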
Drop-in prompt skeleton
Use as a starting point for system prompts / instruction blocks:
You are an expert assistant.
<output_verbosity_spec>
- Default: 3–6 sentences OR ≤5 bullets.
- Simple questions: ≤2 sentences.
- Complex tasks: 1 short overview paragraph, then ≤5 bullets tagged:
What changed, Where, Risks, Next steps, Open questions.
- Avoid long narrative paragraphs; prefer compact bullets + short sections.
</output_verbosity_spec>
<uncertainty_and_ambiguity>
- If ambiguous/underspecified: ask up to 1–3 precise clarifying questions OR present 2–3 interpretations with labeled assumptions.
- Never fabricate exact figures, IDs, line numbers, or citations.
- Prefer “Based on the provided context…” over absolute claims when uncertain.
</uncertainty_and_ambiguity>
<tool_usage_rules>
- Prefer tools over memory whenever you need fresh/user-specific data.
- Parallelize independent reads when possible.
- After any write/update tool call, restate:
What changed, Where, and validation performed.
</tool_usage_rules>
<scope_discipline>
- Implement EXACTLY and ONLY what the user asked.
- No extra features, no embellishments.
- If something is ambiguous, choose the simplest valid interpretation.
</scope_discipline>
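The skeleton above can be assembled programmatically when you want to toggle blocks per task. A minimal sketch, with the block bodies abbreviated (the full text is in the skeleton above):

```python
# Assemble the tagged blocks into one system prompt string.
# Block bodies are abbreviated placeholders here; use the full
# skeleton text in practice.
blocks = {
    "output_verbosity_spec": "- Default: 3-6 sentences OR <=5 bullets.",
    "uncertainty_and_ambiguity": "- Never fabricate exact figures, IDs, or citations.",
    "tool_usage_rules": "- Prefer tools over memory for fresh/user-specific data.",
    "scope_discipline": "- Implement EXACTLY and ONLY what the user asked.",
}

system_prompt = "You are an expert assistant.\n" + "\n".join(
    f"<{tag}>\n{body}\n</{tag}>" for tag, body in blocks.items()
)
# Pass system_prompt as the system/instructions message of your request.
```

Keeping each block as a separate entry makes it easy to drop, say, tool_usage_rules when no tools are exposed.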
Migration checklist (GPT-5/5.1/4.x → GPT-5.2)
- Make one change at a time: switch model first; keep prompts functionally identical.
- Pin reasoning_effort to match the old latency/depth profile (don't rely on defaults).
- Run evals; only then tune (usually: verbosity clamp + scope discipline + ambiguity rails).
See references/guide.md for a compact mapping table.
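The pinning step from the checklist can be sketched as a request payload. This assumes the OpenAI Responses API shape and a hypothetical input; adapt to your SDK version.

```python
# When migrating, pin reasoning effort explicitly instead of inheriting
# the new model's default. Request shape follows the OpenAI Responses
# API; the instructions/input values are placeholders.
request_params = {
    "model": "gpt-5.2",
    "reasoning": {"effort": "medium"},  # match the old latency/depth profile
    "instructions": "You are an expert assistant. ...",  # prompt kept functionally identical
    "input": "Summarize the attached report.",
}
# e.g. client.responses.create(**request_params) with the official SDK
```

Holding the prompt constant while pinning effort isolates the model switch, so eval deltas reflect the model rather than incidental prompt or depth changes.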