---
name: skill-creator
---

# Skill Creator
## Language

Match the user's language: respond in the same language the user uses.
## Overview

Create new agent skills by guiding the user through a series of choices, then generating a ready-to-edit project structure with best practices baked in.
## How It Works

- Collect requirements through dialogue (AskUserQuestion)
- Call `scaffold.py` with the collected parameters (non-interactive)
- Report what was generated and guide next steps
## Dialogue Flow

Progress:
- Step 1: Skill name
- Step 2: Skill level
- Step 3: Environment strategy (L1 only)
- Step 4: Output directory
- Generate and report
Follow these steps in order. Use AskUserQuestion for steps 1–4.
### Step 1: Skill Name
Ask the user for a skill name. Validate it meets these rules:
- Lowercase letters, digits, and hyphens only
- Must start with a letter, end with a letter or digit
- No consecutive hyphens
- Maximum 64 characters
If invalid, explain the constraint and ask again.
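The naming rules above can be collapsed into a single regular expression. A minimal sketch — the pattern and function name are illustrative, not part of `scaffold.py`:

```python
import re

# Lowercase letters, digits, and hyphens; starts with a letter; ends with
# a letter or digit; no consecutive hyphens (the lookahead after "-"
# requires the next character to be a letter or digit).
_NAME_RE = re.compile(r"^[a-z](?:[a-z0-9]|-(?=[a-z0-9]))*$")

def is_valid_skill_name(name: str) -> bool:
    """Check a candidate skill name against the Step 1 rules."""
    return len(name) <= 64 and bool(_NAME_RE.fullmatch(name))
```

For example, `my-skill` and `a1` pass, while `My-Skill`, `1skill`, `skill-`, and `a--b` are rejected.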
### Step 2: Skill Level
Ask the user to choose a skill level:
- **L0 — Pure Prompt**: Only a SKILL.md file. All capabilities come from Claude's built-in tools, MCP servers, or general knowledge. Best for workflow guides, domain knowledge, and configuration wizards. No scripts needed.
- **L0+ — Prompt + Helper Scripts**: SKILL.md plus lightweight helper scripts for environment detection, status caching, or other auxiliary tasks. Core logic stays in the prompt. Best when Claude needs a preflight check or a small utility but handles business logic itself.
- **L1 — Prompt + Business Scripts**: SKILL.md orchestrates CLI scripts that handle core business logic. Scripts accept parameters, return structured JSON, and follow MCP tool design principles. Best for skills that interact with APIs, process data, or perform operations that benefit from deterministic code.
### Step 3: Environment Strategy (L1 only)
If the user chose L1, ask which environment strategy to use:
- **stdlib** — Python standard library only. Zero dependencies, zero environment issues. Choose this when `urllib`, `json`, `argparse`, and `pathlib` are sufficient. This is the recommended default.
- **uv** — Dependencies declared inline via PEP 723 and executed with `uv run`. No persistent venv; packages come from a global, version-isolated cache. Choose this when external packages are needed but a full venv is overkill.
- **venv** — Traditional per-skill virtual environment with a `run.sh` wrapper. Choose this only when dependencies require C extensions or the skill runs long-lived processes.
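For reference, the uv strategy relies on a PEP 723 inline metadata block at the top of the script, which `uv run` reads to resolve dependencies. A sketch (the `httpx` dependency is purely an example):

```
# /// script
# requires-python = ">=3.9"
# dependencies = ["httpx"]
# ///
# ...script body importing httpx goes here...
```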
### Step 4: Output Directory

Ask where to generate the skill. Default: current working directory. The script creates `skills/<name>/` under this directory.
## Generate
After collecting all parameters, run:
```bash
python3 {SKILL_DIR}/scripts/scaffold.py scaffold \
  --name <name> \
  --level <level> \
  [--env <strategy>] \
  --output <dir>
```

Where `{SKILL_DIR}` is the directory containing this SKILL.md file; resolve it at runtime.
The script outputs JSON to stdout:

```json
{
  "status": "ok",
  "level": "l1",
  "env": "uv",
  "created": ["skills/my-skill/SKILL.md", "skills/my-skill/scripts/main.py", ...],
  "hint": "..."
}
```

If it fails, stderr contains JSON with `error`, `hint`, and `recoverable` fields.
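The invocation and JSON contract above can be wrapped in a few lines of Python. A hedged sketch — `build_scaffold_cmd` and `run_scaffold` are hypothetical helper names, not part of this skill:

```python
import json
import subprocess

def build_scaffold_cmd(skill_dir, name, level, output, env=None):
    """Assemble the scaffold.py command line described above."""
    cmd = ["python3", f"{skill_dir}/scripts/scaffold.py", "scaffold",
           "--name", name, "--level", level, "--output", output]
    if env is not None:  # --env applies to L1 skills only
        cmd += ["--env", env]
    return cmd

def run_scaffold(skill_dir, name, level, output, env=None):
    """Run the scaffolder and decode its JSON result."""
    proc = subprocess.run(
        build_scaffold_cmd(skill_dir, name, level, output, env),
        capture_output=True, text=True)
    if proc.returncode == 0:
        return json.loads(proc.stdout)   # status / level / env / created / hint
    err = json.loads(proc.stderr)        # error / hint / recoverable
    raise RuntimeError(f"{err['error']} (hint: {err.get('hint', '')})")
```

On success the parsed dict's `created` list feeds directly into the completion report below.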
## Completion Report

After successful generation, present:

```
[Skill Creator] Complete!
Skill: <name> (Level: <level>[, Env: <env>])
Output: <directory>
Files created:
  • <list from JSON "created" field>

Next Steps:
→ Edit SKILL.md — replace TODO markers, write description with trigger phrases
→ Customize scripts/ (L0+/L1)
→ Test preflight (L0+/L1)
→ Publish with skill-publish when ready
```
Then provide detailed guidance:

1. **Edit SKILL.md** — Replace all placeholder markers. The `description` field in frontmatter is critical: it determines when Claude activates the skill. Be specific and include trigger phrases.
2. **Customize scripts/ (L0+/L1)** — The generated scripts are functional frameworks with placeholder markers. Add your business logic.
3. **Test preflight (L0+/L1)** — Run the preflight command to verify the JSON output structure works:
   - L0+: `bash scripts/helper.sh preflight`
   - L1 stdlib: `python3 scripts/main.py preflight`
   - L1 uv: `uv run scripts/main.py preflight`
   - L1 venv: `bash scripts/run.sh preflight` (after setup)
4. **Add references/** — Put detailed reference documents here and reference them from SKILL.md with file-read instructions. Keep SKILL.md lean.
5. **Ready to publish?** — If the `skill-publish` skill is installed, use it to wrap this into a complete GitHub repo with README, LICENSE, plugin.json, and marketplace.json. If not installed: `npx skills add psylch/better-skills@skill-publish -g -y`
## References

- For skill design conventions — output formats, error handling, environment strategies, preflight conventions — read `references/best_practices.md`.
- For common quality issues and how to avoid them, read `references/improvement_patterns.md`.
- For the automated validation checks that will be run (by skill-review), read `references/validation_rules.md`.