create-docker-skill
Follow the create-docker-skill workflow to scaffold a compliant agent skill
that depends on containerized runtimes (Docker, Nextflow, or HPC).
Inputs
$ARGUMENTS (optional): a skill name or use-case description. Omit to start with discovery.
Steps
- If $ARGUMENTS provides a skill name, use it to seed the discovery phase
- Follow the create-docker-skill phased workflow: determine the container runtime and workflow type, gather environment-check requirements, design pre-flight validation and subprocess execution scaffolding, then generate the skill directory
- Report the created skill path and Docker environment setup instructions
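The pre-flight validation step above can be sketched as a small Python check. This is an illustrative sketch only; the function name `check_docker_environment` and the returned keys are assumptions, not part of the skill spec:

```python
import shutil
import subprocess

def check_docker_environment(timeout: float = 10.0) -> dict:
    """Pre-flight check: is a Docker CLI on PATH, and is the daemon reachable?

    Returns a status dict instead of raising, so a generated skill can
    degrade gracefully when Docker is absent.
    """
    status = {"cli_found": False, "daemon_reachable": False, "version": None}
    docker = shutil.which("docker")
    if docker is None:
        return status  # no CLI on PATH: caller should warn or fall back
    status["cli_found"] = True
    try:
        # `docker version --format` queries the daemon; failure here usually
        # means the daemon is down or the user lacks socket permissions.
        proc = subprocess.run(
            [docker, "version", "--format", "{{.Server.Version}}"],
            capture_output=True, text=True, timeout=timeout,
        )
    except (subprocess.TimeoutExpired, OSError):
        return status
    if proc.returncode == 0:
        status["daemon_reachable"] = True
        status["version"] = proc.stdout.strip()
    return status
```

A generated skill would call this once at startup and branch on `cli_found` / `daemon_reachable` before attempting any container work.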
Output
Skill directory with SKILL.md containing pre-flight environment checks, subprocess
execution patterns, security-override config, and Docker-aware error handling.
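The "Docker-aware error handling" the scaffold emits can be illustrated with a failure classifier. The helper name and the hint strings are hypothetical; the exit codes 125/126/127 follow the documented Docker CLI convention (CLI error, command not executable, command not found):

```python
def classify_docker_failure(returncode: int, stderr: str) -> str:
    """Map a failed `docker run` to an actionable hint (heuristic sketch)."""
    low = stderr.lower()
    if "permission denied" in low and "docker.sock" in low:
        return "daemon-permission: add the user to the docker group or use rootless mode"
    if "cannot connect to the docker daemon" in low:
        return "daemon-down: start the Docker daemon before retrying"
    if "no such image" in low or "pull access denied" in low:
        return "image-missing: pull or build the image first"
    if returncode == 125:
        return "docker-cli-error: check the docker run flags"
    if returncode == 126:
        return "not-executable: the container command exists but cannot be invoked"
    if returncode == 127:
        return "not-found: the command is missing inside the image"
    return f"container-exit-{returncode}: inspect the container logs"
```

A generated SKILL.md would pair this with the subprocess execution pattern, surfacing the hint instead of a raw traceback.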
Edge Cases
- If $ARGUMENTS is empty: begin with discovery; do not assume Docker is available
- If Docker is not installed in the target environment: generate graceful-degradation handling
- If the workflow uses HPC or Nextflow instead of Docker: adapt scaffolding accordingly
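Adapting the scaffolding to HPC or Nextflow implies a runtime-detection step. A minimal sketch, assuming a PATH-probing preference order (Docker, then Nextflow, then common HPC schedulers) that the real workflow may well refine:

```python
import shutil

def detect_runtime() -> str:
    """Pick a scaffolding target by probing the PATH (heuristic sketch)."""
    if shutil.which("docker"):
        return "docker"
    if shutil.which("nextflow"):
        return "nextflow"
    if shutil.which("sbatch") or shutil.which("qsub"):  # Slurm or PBS/SGE
        return "hpc"
    return "none"  # triggers the graceful-degradation path above
```

Returning `"none"` rather than raising keeps the "Docker not installed" edge case on the same code path as normal runtime selection.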