# create-plugin

Follow the create-plugin skill workflow to scaffold a new Claude Code plugin.
## Inputs

- `$ARGUMENTS` (optional): plugin name in kebab-case. Omit to start with discovery.
## Steps

- If `$ARGUMENTS` provides a plugin name, use it to seed Phase 1
- Follow the create-plugin phased workflow: discover purpose and plugin type, plan the component table (skills / commands / agents / hooks / MCP), ask clarifying questions per component, scaffold the directory structure and `plugin.json`, implement each component using the appropriate sub-skill, validate, test, and document
- Report the created plugin directory and verification checklist results
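The scaffolding step above can be sketched in shell. This is a minimal illustration, not the skill's actual implementation: the plugin name, directory names, and manifest fields beyond `name`, `description`, and `version` are assumptions.

```shell
# Scaffold a minimal plugin skeleton (names are illustrative).
PLUGIN=my-plugin   # hypothetical kebab-case name, e.g. seeded from $ARGUMENTS
mkdir -p "$PLUGIN/.claude-plugin" "$PLUGIN/skills" "$PLUGIN/commands"

# Write a minimal manifest; real manifests may carry more fields.
cat > "$PLUGIN/.claude-plugin/plugin.json" <<'EOF'
{
  "name": "my-plugin",
  "description": "What the plugin does, in one sentence.",
  "version": "0.1.0"
}
EOF
```

Each component directory (`skills/`, `commands/`, and so on) is then filled in by the corresponding sub-skill.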
## Output

Plugin directory with `.claude-plugin/plugin.json`, component directories, `README.md`, and a `.claude/settings.json` stub for reliable local discovery.
## Edge Cases

- If `$ARGUMENTS` is empty: begin with Phase 1 discovery; do not pre-fill the plugin name
- If a similar plugin already exists: reference it as a starting point
- If MCP integrations are needed: invoke `create-mcp-integration` for each one
- After scaffolding: run `/agent-scaffolders:audit-plugin` to validate structure
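Before handing off to the audit command, a quick local sanity check can confirm the manifest exists and parses. The check below is a self-contained sketch: it builds a throwaway fixture (`demo-plugin`) purely for demonstration, and the checks themselves are illustrative rather than what the audit actually runs.

```shell
# Illustrative structural check: does the manifest exist and parse as JSON?
PLUGIN=demo-plugin
mkdir -p "$PLUGIN/.claude-plugin"   # throwaway fixture for this demo
printf '{"name": "demo-plugin", "version": "0.1.0"}\n' > "$PLUGIN/.claude-plugin/plugin.json"

if python3 -c "import json; json.load(open('$PLUGIN/.claude-plugin/plugin.json'))"; then
  echo "plugin.json: valid JSON"
fi
```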