# generate-agent-skills: Agent Skill Architect Workflow

This skill guides you through creating high-quality Agent Skills following a proven 6-step process.
## 🚨 CRITICAL WORKFLOW REQUIREMENTS 🚨

Before you begin, understand these NON-NEGOTIABLE rules:

- You MUST run `scripts/scaffold_skill.py` in Step 3. Manual file creation is PROHIBITED. The scaffolding script ensures consistency.
- You MUST use the generated templates. After scaffolding, templates exist in `references/`. Use them as your foundation.
- You MUST run `scripts/validate_skill.py` in Step 5. Validation catches errors before they propagate.
- You MUST follow all 6 steps in order. Skipping steps leads to non-compliant or broken skills.
If you bypass scaffolding scripts, you have FAILED this workflow.
## Step 1: Understanding the Skill
Before scaffolding, clearly understand how the skill will be used through concrete examples.
For New Skills:
Ask the user clarifying questions to understand:
- What functionality should the skill support?
- What are example queries that should trigger this skill?
- What outputs or actions should result?
- Are there existing workflows or tools to integrate?
Example questions:
- "Can you give some examples of how this skill would be used?"
- "What would a user say that should trigger this skill?"
- "What existing scripts or documentation should be included?"
For Existing Skills:
If working with an existing skill, analyze:
- Current SKILL.md structure and content
- Existing scripts, references, and assets
- What's working well vs. what needs improvement
Conclude this step when: You have a clear sense of the skill's functionality and triggering scenarios.
## Step 2: Planning Reusable Contents
Analyze the concrete examples from Step 1 to identify what reusable resources would help.
### ⚠️ Critical Decision: Script vs. Checklist
Before planning scripts, ask: "Is this task primarily analysis or computation?"
**Analysis tasks** (reading, synthesizing, pattern recognition) → use checklists or reference docs for the LLM to follow. Examples: repository analysis, code review, documentation synthesis.

**Computation tasks** (math, APIs, precise transformations) → use scripts for deterministic execution. Examples: schema validation, API calls, file format conversion.
Real example from this session:

- ❌ Initially planned an `analyze_repo.py` script
- ✅ Corrected to an `analysis_checklist.md` reference
- Why: repository analysis is an LLM strength (reading, pattern detection, synthesis)

See references/BEST_PRACTICES.md §6 for the detailed decision flowchart.
Ask for each example:
- How would I execute this task from scratch?
- What scripts, references, or assets would make this repeatable?
- Is this analysis (LLM) or computation (script)?
Resource Types:
`scripts/` - For deterministic operations only:

- ✅ Math/computation (calculations, aggregations)
- ✅ External interactions (API calls, database queries)
- ✅ Precise transformations (file format conversion, schema validation)
- ✅ Repetitive generation (boilerplate rendering)
- ❌ Analysis tasks (use checklists instead)
- ❌ Pattern recognition (LLM excels at this)

`references/` - For LLM-driven analysis and knowledge:

- ✅ Checklists for systematic analysis (e.g., repository discovery)
- ✅ Pattern libraries (e.g., positive constraint conversions)
- ✅ API documentation (endpoints, parameters)
- ✅ Domain knowledge (company policies, industry standards)
- ✅ Decision trees and workflows

`assets/` - For files used in output:

- ✅ Templates (documents, slides, boilerplate code)
- ✅ Images (logos, icons, diagrams)
- ✅ Fonts (typography files)
- ✅ Seed data (sample datasets, fixtures)
Output: A list of specific files to create with correct categorization (script vs reference)
## Step 3: Skill Scaffolding

⚠️ MANDATORY STEP - DO NOT SKIP ⚠️

You MUST execute the scaffolding script. Manual file creation is PROHIBITED.

Command:

```bash
python3 scripts/scaffold_skill.py --name <skill-name>
```

Note: `scripts/` is relative to this skill's root directory. Use the full path to the script when running from a different directory.
Options:

- Default mode: creates SKILL.md plus example files in `scripts/`, `references/`, and `assets/`
- Simple mode: use the `--simple` flag for a minimal structure (SKILL.md only)
- Category subfolder: use `--category <name>` if your repo organises skills by category
- Explicit location: use `--output-dir <path>` to create the skill at a specific path
The script will:

- ✅ Validate naming conventions (lowercase, hyphens, alphanumeric)
- ✅ Auto-detect the skills directory (checks `.github/skills/`, then `skills/`, then the CWD)
- ✅ Generate SKILL.md with structuring guidance
- ✅ Create example files to demonstrate resource organization

Note: names must match the regex `^[a-z0-9][a-z0-9-]*[a-z0-9]$`.
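The naming rule can be checked in isolation with a short snippet (an illustrative sketch; the scaffolding script remains the authoritative check):

```python
import re

# Naming rule enforced by the scaffolding script: lowercase alphanumerics
# and hyphens, starting and ending with an alphanumeric character.
SKILL_NAME_RE = re.compile(r"^[a-z0-9][a-z0-9-]*[a-z0-9]$")

def is_valid_skill_name(name: str) -> bool:
    """Return True if `name` satisfies the skill naming convention."""
    return bool(SKILL_NAME_RE.fullmatch(name))
```

Note that the regex requires at least two characters and rejects uppercase letters, leading/trailing hyphens, and underscores.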
### ✅ Verification Checkpoint
After running the scaffolding script, confirm these files exist:
```bash
ls -la <output-path>/<skill-name>/
```

Expected output:

- `SKILL.md` (with "Structuring This Skill" guidance section)
- `scripts/example.py` (placeholder script)
- `references/example_reference.md` (placeholder reference)
- `assets/README.md` (if using default mode)
🛑 STOP CONDITIONS:

- If `SKILL.md` does NOT exist → scaffolding failed; do NOT proceed
- If you created files manually → you have violated the workflow; DELETE them and re-run the script
- If the script reported errors → fix them before proceeding to Step 4
## Step 4: Content Generation
Populate the skill with actual content.
### 4.1: Implement Reusable Resources First
Start with scripts/, references/, and assets/ identified in Step 2.
For scripts:

- Replace `scripts/example.py` with the actual implementation
- Test by running: `python3 scripts/<script_name>.py`
- Ensure error messages are descriptive (print to stderr)
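As a sketch of these conventions, a replacement for `scripts/example.py` might be shaped like this (the body of `main` is a hypothetical placeholder, not a generated script):

```python
import sys

def main() -> int:
    """Run the script's work; return a process exit code."""
    try:
        result = 2 + 2  # placeholder for the script's real work
    except Exception as exc:
        # Descriptive errors go to stderr so the agent can diagnose
        # failures; normal output stays on stdout.
        print(f"error: computation failed: {exc}", file=sys.stderr)
        return 1
    print(result)
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Returning a nonzero exit code on failure lets the calling agent detect errors without parsing output.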
For references:

- Replace `references/example_reference.md` with the actual docs
- Keep SKILL.md lean - move details here
- For large files (>100 lines), add a Table of Contents
For assets:

- Add the actual template files, images, and fonts
- Replace or delete `assets/README.md`
- Use descriptive filenames
Important: Delete any example files you don't need!
### 4.2: Write SKILL.md Content
Follow the structuring guidance embedded in the generated SKILL.md template.
Choose your structure pattern:

- Workflow-Based: sequential processes (see `references/workflows.md`)
- Task-Based: tool collections with different operations
- Reference/Guidelines: standards, specifications, coding rules
- Capabilities-Based: integrated systems with multiple features
Key elements:

Frontmatter (YAML):

- `name`: must match the directory name exactly
- `description`: high-entropy, keyword-rich, written in the 3rd person
- Include WHEN to use this skill (triggers)
- Include WHAT the skill does (capabilities)
- Example: "Processes PDF documents for form filling, text extraction, and merging. Use when working with PDF files or when user requests document manipulation tasks."

Body (Markdown):

- Use imperative/infinitive form ("Run the script", not "You should run")
- Reference scripts and references explicitly by path
- Consult `references/BEST_PRACTICES.md` for the "Freedom Scale"
- Consult `references/output-patterns.md` for output formatting
Delete the "Structuring This Skill" section when done - it's guidance only!
### 4.3: Design Patterns
For multi-step processes: See references/workflows.md
- Sequential workflows (step 1 → step 2 → step 3)
- Conditional workflows (if/then branching)
- Iterative workflows (refinement loops)
For consistent outputs: See references/output-patterns.md
- Strict templates (non-negotiable formats)
- Flexible guidance (adaptable structure)
- Examples-based (show don't tell)
- Validation checklists (quality requirements)
## Step 5: Validation

⚠️ MANDATORY STEP - DO NOT SKIP ⚠️

Run the validation script to ensure specification compliance.

Command:

```bash
python3 scripts/validate_skill.py --path <path-to-new-skill>
```

Note: `scripts/` is relative to this skill's root directory. Use the full path to the script when running from a different directory.

Example:

```bash
python3 scripts/validate_skill.py --path .github/skills/my-new-skill
```
What it checks:

- ✅ Directory naming regex (`^[a-z0-9][a-z0-9-]*[a-z0-9]$`)
- ✅ SKILL.md exists
- ✅ YAML frontmatter has the required fields (`name`, `description`, `version`, `author`, `license`)
- ✅ `name` in YAML matches the directory name
- ✅ `version` follows SemVer format (X.Y.Z)
- ✅ `description` contains a `WHEN:` trigger clause
- ⚠️ Advisory: `tags` field present (recommended)
- ⚠️ Advisory: presence of `references/` and `scripts/` directories
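Two of these checks can be sketched in a few lines (assumed behavior for illustration only; `scripts/validate_skill.py` is authoritative):

```python
import re

def is_semver(version: str) -> bool:
    """Check the X.Y.Z form with purely numeric components."""
    return bool(re.fullmatch(r"\d+\.\d+\.\d+", version))

def has_when_clause(description: str) -> bool:
    """Check that the description contains a 'WHEN:' trigger clause."""
    return "WHEN:" in description
```

For example, `1.0.0` passes the SemVer check while `v1.0.0` and `1.0` do not.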
If validation fails:
- Read the error output carefully
- Fix critical violations immediately
- Warnings are informational (acceptable for simple skills)
When valid: Proceed to testing!
### ✅ Post-Validation Checklist
Before proceeding to Step 6, confirm:
Workflow Compliance:

- I RAN `scripts/scaffold_skill.py` (Step 3)
- I USED the generated templates from scaffolding
- I CONSULTED `references/TEMPLATES.md` and `references/BEST_PRACTICES.md` (Step 4)
- I RAN `scripts/validate_skill.py` (Step 5)
- Validation script reported SUCCESS (no critical errors)

Content Quality:

- YAML frontmatter includes all required fields: `name`, `description`, `license` (top-level); `metadata.author`, `metadata.version` (under `metadata:`)
- `metadata.summary` is present, ≤ 160 chars, plain language, with no `WHEN:` clause (recommended)
- `metadata.tags` field is present (recommended)
- `metadata.author` is set to your organization name; `license` to the project's SPDX identifier
- Description is high-entropy and keyword-rich with a `WHEN:` clause (8+ trigger phrases)
- No "Structuring This Skill" guidance section remains in SKILL.md
- Example files (`example.py`, `example_reference.md`) are deleted or replaced
- Scripts are in `scripts/`, references in `references/`, templates in `assets/`
🛑 STOP CONDITION: If you did NOT run the scaffolding script or manually created files, STOP and re-do from Step 3.
## Step 6: Testing and Iteration
After creating the skill, test and refine based on real usage.
Testing Workflow:

1. Test with real examples from Step 1
   - Does the skill trigger on expected queries?
   - Do scripts execute without errors?
   - Is output quality acceptable?
2. Identify friction points:
   - Are instructions clear enough?
   - Are there missing scripts or references?
   - Is context loaded efficiently?
3. Iterate on improvements:
   - Update SKILL.md for clarity
   - Add missing examples or edge cases
   - Optimize script error handling
   - Split large references if needed (progressive disclosure)
4. Re-validate after changes
Common Iteration Patterns:

- Problem: skill isn't triggering when expected. Solution: enhance the description with more keywords and trigger scenarios.
- Problem: agent struggles with workflow steps. Solution: add a decision tree or flowchart; consult references/workflows.md.
- Problem: context feels bloated. Solution: move content from SKILL.md to references/; add grep hints.
- Problem: scripts fail in edge cases. Solution: add error handling; print descriptive messages to stderr.
- Problem: output quality is inconsistent. Solution: add templates or a validation checklist; see references/output-patterns.md.
When to Stop Iterating:

- ✅ Skill triggers reliably on target queries
- ✅ Workflows execute without confusion
- ✅ Output quality meets requirements
- ✅ No critical errors in testing
## Knowledge Retrieval
If questions arise during skill creation:
Specification questions (naming, structure, required files):
→ Read references/SPECIFICATION.md

Best practices (context economy, freedom scale, anti-patterns):
→ Read references/BEST_PRACTICES.md

Templates and examples (frontmatter, structure patterns):
→ Read references/TEMPLATES.md

Workflow design (sequential, conditional, iterative):
→ Read references/workflows.md

Output formatting (templates, examples, validation):
→ Read references/output-patterns.md
Do not hallucinate answers. Always consult the authoritative sources.