Agent Skill Architect Workflow

This skill guides you through creating high-quality Agent Skills following a proven 6-step process.


🚨 CRITICAL WORKFLOW REQUIREMENTS 🚨

Before you begin, understand these NON-NEGOTIABLE rules:

  1. You MUST run scripts/scaffold_skill.py in Step 3.
    Manual file creation is PROHIBITED. The scaffolding script ensures consistency.

  2. You MUST use the generated templates.
    After scaffolding, templates exist in references/. Use them as your foundation.

  3. You MUST run scripts/validate_skill.py in Step 5.
    Validation catches errors before they propagate.

  4. You MUST follow all 6 steps in order.
    Skipping steps leads to non-compliant or broken skills.

If you bypass scaffolding scripts, you have FAILED this workflow.


Step 1: Understanding the Skill

Before scaffolding, clearly understand how the skill will be used through concrete examples.

For New Skills:

Ask the user clarifying questions to understand:

  • What functionality should the skill support?
  • What are example queries that should trigger this skill?
  • What outputs or actions should result?
  • Are there existing workflows or tools to integrate?

Example questions:

  • "Can you give some examples of how this skill would be used?"
  • "What would a user say that should trigger this skill?"
  • "What existing scripts or documentation should be included?"

For Existing Skills:

If working with an existing skill, analyze:

  • Current SKILL.md structure and content
  • Existing scripts, references, and assets
  • What's working well vs. what needs improvement

Conclude this step when: You have a clear sense of the skill's functionality and triggering scenarios.


Step 2: Planning Reusable Contents

Analyze the concrete examples from Step 1 to identify what reusable resources would help.

โš ๏ธ Critical Decision: Script vs. Checklist

Before planning scripts, ask: "Is this task primarily analysis or computation?"

Analysis tasks (reading, synthesizing, pattern recognition):
→ Use checklists or reference docs for the LLM to follow
→ Examples: Repository analysis, code review, documentation synthesis

Computation tasks (math, APIs, precise transformations):
→ Use scripts for deterministic execution
→ Examples: Schema validation, API calls, file format conversion

Real example from this session:

  • โŒ Initially planned analyze_repo.py script
  • โœ… Corrected to analysis_checklist.md reference
  • Why: Repository analysis = LLM strength (reading, pattern detection, synthesis)

See references/BEST_PRACTICES.md §6 for a detailed decision flowchart.


Ask for each example:

  1. How would I execute this task from scratch?
  2. What scripts, references, or assets would make this repeatable?
  3. Is this analysis (LLM) or computation (script)?

Resource Types:

scripts/ - For deterministic operations only:

  • ✅ Math/computation (calculations, aggregations)
  • ✅ External interactions (API calls, database queries)
  • ✅ Precise transformations (file format conversion, schema validation)
  • ✅ Repetitive generation (boilerplate rendering)
  • ❌ Analysis tasks (use checklists instead)
  • ❌ Pattern recognition (LLM excels at this)

references/ - For LLM-driven analysis and knowledge:

  • ✅ Checklists for systematic analysis (e.g., repository discovery)
  • ✅ Pattern libraries (e.g., positive constraint conversions)
  • ✅ API documentation (endpoints, parameters)
  • ✅ Domain knowledge (company policies, industry standards)
  • ✅ Decision trees and workflows

assets/ - For files used in output:

  • ✅ Templates (documents, slides, boilerplate code)
  • ✅ Images (logos, icons, diagrams)
  • ✅ Fonts (typography files)
  • ✅ Seed data (sample datasets, fixtures)

Output: A list of specific files to create with correct categorization (script vs reference)
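For example, a hypothetical PDF-processing skill might plan the following (file names are illustrative only):

  • scripts/fill_form.py - deterministic form-field insertion (computation)
  • references/extraction_checklist.md - steps for the LLM to follow when summarizing extracted text (analysis)
  • assets/report_template.md - boilerplate for generated reports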


Step 3: Skill Scaffolding

โš ๏ธ MANDATORY STEP - DO NOT SKIP โš ๏ธ

You MUST execute the scaffolding script. Manual file creation is PROHIBITED.

Command:

python3 scripts/scaffold_skill.py --name <skill-name>

Note: scripts/ is relative to this skill's root directory. Use the full path to the script when running from a different directory.

Options:

  • Default mode: Creates SKILL.md + example files in scripts/, references/, assets/
  • Simple mode: Use --simple flag for minimal structure (SKILL.md only)
  • Category subfolder: Use --category <name> if your repo organises skills by category
  • Explicit location: Use --output-dir <path> to create the skill at a specific path
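For example, to scaffold a skill named pdf-tools into a category subfolder, or as a minimal SKILL.md-only skill (the skill name and category here are illustrative):

python3 scripts/scaffold_skill.py --name pdf-tools --category document-processing

python3 scripts/scaffold_skill.py --name pdf-tools --simple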

The script will:

  • ✅ Validate naming conventions (lowercase, hyphens, alphanumeric)
  • ✅ Auto-detect the skills directory (checks .github/skills/, then skills/, then CWD)
  • ✅ Generate SKILL.md with structuring guidance
  • ✅ Create example files to demonstrate resource organization

Note: Naming must match the regex ^[a-z0-9][a-z0-9-]*[a-z0-9]$ (lowercase letters, digits, and hyphens, starting and ending with an alphanumeric character). For example, pdf-tools is valid; PDF_Tools and -pdf-tools- are not.


✅ Verification Checkpoint

After running the scaffolding script, confirm these files exist:

ls -la <output-path>/<skill-name>/

Expected output:

  • SKILL.md (with "Structuring This Skill" guidance section)
  • scripts/example.py (placeholder script)
  • references/example_reference.md (placeholder reference)
  • assets/README.md (if using default mode)

🛑 STOP CONDITIONS:

  • If SKILL.md does NOT exist → Scaffolding failed, do NOT proceed
  • If you created files manually → You have violated the workflow; DELETE them and re-run the script
  • If the script reported errors → Fix the errors before proceeding to Step 4

Step 4: Content Generation

Populate the skill with actual content.

4.1: Implement Reusable Resources First

Start with scripts/, references/, and assets/ identified in Step 2.

For scripts:

  • Replace scripts/example.py with actual implementation
  • Test by running: python3 scripts/<script_name>.py
  • Ensure error messages are descriptive (print to stderr)
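A minimal sketch of that error-handling pattern (the script name, argument, and error case are hypothetical):

import sys
from pathlib import Path

def main() -> int:
    if len(sys.argv) != 2:
        print("usage: fill_form.py <input.pdf>", file=sys.stderr)
        return 2
    input_path = Path(sys.argv[1])
    if not input_path.exists():
        # Descriptive failure message on stderr so the agent can diagnose the problem
        print(f"error: input file not found: {input_path}", file=sys.stderr)
        return 1
    print(f"Processed {input_path}")
    return 0

if __name__ == "__main__":
    raise SystemExit(main())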

For references:

  • Replace references/example_reference.md with actual docs
  • Keep SKILL.md lean - move details here
  • For large files (>100 lines), add a Table of Contents

For assets:

  • Add actual template files, images, fonts
  • Replace or delete assets/README.md
  • Use descriptive filenames

Important: Delete any example files you don't need!

4.2: Write SKILL.md Content

Follow the structuring guidance embedded in the generated SKILL.md template.

Choose your structure pattern:

  • Workflow-Based: Sequential processes (see references/workflows.md)
  • Task-Based: Tool collections with different operations
  • Reference/Guidelines: Standards, specifications, coding rules
  • Capabilities-Based: Integrated systems with multiple features

Key elements:

  1. Frontmatter (YAML):

    • name: Must match directory name exactly
    • description: High-entropy, keyword-rich, 3rd person
      • Include WHEN to use this skill (triggers)
      • Include WHAT the skill does (capabilities)
      • Example: "Processes PDF documents for form filling, text extraction, and merging. Use when working with PDF files or when user requests document manipulation tasks."
  2. Body (Markdown):

    • Use imperative/infinitive form ("Run the script", not "You should run")
    • Reference scripts/references explicitly by path
    • Consult references/BEST_PRACTICES.md for the "Freedom Scale"
    • Consult references/output-patterns.md for output formatting
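A minimal frontmatter sketch combining the elements from item 1 above. The skill name and values are illustrative, and the field layout follows the Post-Validation Checklist in Step 5; confirm the exact required fields against scripts/validate_skill.py:

---
name: pdf-tools
description: "Processes PDF documents for form filling, text extraction, and merging. WHEN: working with PDF files or when the user requests document manipulation tasks."
license: MIT
metadata:
  version: 1.0.0
  author: Example Org
  tags: [pdf, documents, forms]
---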

Delete the "Structuring This Skill" section when done - it's guidance only!

4.3: Design Patterns

For multi-step processes: See references/workflows.md

  • Sequential workflows (step 1 โ†’ step 2 โ†’ step 3)
  • Conditional workflows (if/then branching)
  • Iterative workflows (refinement loops)

For consistent outputs: See references/output-patterns.md

  • Strict templates (non-negotiable formats)
  • Flexible guidance (adaptable structure)
  • Examples-based (show don't tell)
  • Validation checklists (quality requirements)

Step 5: Validation

โš ๏ธ MANDATORY STEP - DO NOT SKIP โš ๏ธ

Run the validation script to ensure specification compliance.

Command:

python3 scripts/validate_skill.py --path <path-to-new-skill>

Note: scripts/ is relative to this skill's root directory. Use the full path to the script when running from a different directory.

Example:

python3 scripts/validate_skill.py --path .github/skills/my-new-skill

What it checks:

  • ✅ Directory naming regex (^[a-z0-9][a-z0-9-]*[a-z0-9]$)
  • ✅ SKILL.md exists
  • ✅ YAML frontmatter has required fields (name, description, version, author, license)
  • ✅ name in YAML matches directory name
  • ✅ version follows SemVer format (X.Y.Z)
  • ✅ description contains a WHEN: trigger clause
  • ⚠️ Advisory: tags field present (recommended)
  • ⚠️ Advisory: presence of references/ and scripts/ directories

If validation fails:

  • Read the error output carefully
  • Fix critical violations immediately
  • Warnings are informational (acceptable for simple skills)

When valid: Proceed to testing!


✅ Post-Validation Checklist

Before proceeding to Step 6, confirm:

Workflow Compliance:

  • I RAN scripts/scaffold_skill.py (Step 3)
  • I USED the generated templates from scaffolding
  • I CONSULTED references/TEMPLATES.md and references/BEST_PRACTICES.md (Step 4)
  • I RAN scripts/validate_skill.py (Step 5)
  • Validation script reported SUCCESS (no critical errors)

Content Quality:

  • YAML frontmatter includes all required fields: name, description, license (top-level); metadata.author, metadata.version (under metadata:)
  • metadata.summary is present and ≤ 160 chars: plain language, no WHEN: clause (recommended)
  • metadata.tags field is present (recommended)
  • metadata.author is set to your organization name; license to the project's SPDX identifier
  • Description is high-entropy and keyword-rich with a WHEN: clause (8+ trigger phrases)
  • No "Structuring This Skill" guidance section remains in SKILL.md
  • Example files (example.py, example_reference.md) are deleted or replaced
  • Scripts are in scripts/, references in references/, templates in assets/

🛑 STOP CONDITION: If you did NOT run the scaffolding script or manually created files, STOP and re-do from Step 3.


Step 6: Testing and Iteration

After creating the skill, test and refine based on real usage.

Testing Workflow:

  1. Test with real examples from Step 1

    • Does the skill trigger on expected queries?
    • Do scripts execute without errors?
    • Is output quality acceptable?
  2. Identify friction points:

    • Are instructions clear enough?
    • Are there missing scripts or references?
    • Is context loaded efficiently?
  3. Iterate on improvements:

    • Update SKILL.md for clarity
    • Add missing examples or edge cases
    • Optimize script error handling
    • Split large references if needed (progressive disclosure)
  4. Re-validate after changes

Common Iteration Patterns:

Problem: Skill isn't triggering when expected
Solution: Enhance description with more keywords and trigger scenarios

Problem: Agent struggles with workflow steps
Solution: Add decision tree or flowchart; consult references/workflows.md

Problem: Context feels bloated
Solution: Move content from SKILL.md to references/; add grep hints

Problem: Scripts fail in edge cases
Solution: Add error handling; print descriptive messages to stderr

Problem: Output quality inconsistent
Solution: Add templates or validation checklist; see references/output-patterns.md
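For the bloated-context fix above, a grep hint in SKILL.md might look like this (the reference file and search term are hypothetical):

To find constraint conversion examples, run: grep -n "positive constraint" references/pattern_library.md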

When to Stop Iterating:

✅ Skill triggers reliably on target queries
✅ Workflows execute without confusion
✅ Output quality meets requirements
✅ No critical errors in testing


Knowledge Retrieval

If questions arise during skill creation:

Specification questions (naming, structure, required files):
→ Read references/SPECIFICATION.md

Best practices (context economy, freedom scale, anti-patterns):
→ Read references/BEST_PRACTICES.md

Templates and examples (frontmatter, structure patterns):
→ Read references/TEMPLATES.md

Workflow design (sequential, conditional, iterative):
→ Read references/workflows.md

Output formatting (templates, examples, validation):
→ Read references/output-patterns.md

Do not hallucinate answers. Always consult the authoritative sources.
