# Dockerize and Deploy
Containerize a repo and produce a production-ready deployment setup. Work in phases — never write everything at once.
## Quick Start
- Audit the repo (language, services, DB, existing Docker files).
- Confirm phases with the user.
- Execute one phase at a time, verifying after each.
## Workflow

### 1. Audit the repo
Read the codebase to identify:
- Runtime: Node.js, Python, Go, Java, etc. and version
- Services: web server, background workers, scheduled jobs
- Datastores: PostgreSQL, MySQL, Redis, MongoDB, S3-compatible storage
- Build step: bundler, compiler, static assets
- Existing Docker files: `Dockerfile`, `docker-compose.yml`, `.dockerignore`
- Secrets/env vars: `.env.example`, config files, hardcoded values
Summarize findings and propose phases before writing anything.
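The runtime part of the audit can be sketched as a shell function keyed on well-known manifest files (the file names below are common defaults, not an exhaustive list):

```shell
# Infer the primary runtime of a repo from its manifest files.
detect_runtime() {
  dir="${1:-.}"
  if   [ -f "$dir/package.json" ]; then echo "node"
  elif [ -f "$dir/pyproject.toml" ] || [ -f "$dir/requirements.txt" ]; then echo "python"
  elif [ -f "$dir/go.mod" ]; then echo "go"
  elif [ -f "$dir/pom.xml" ] || [ -f "$dir/build.gradle" ]; then echo "java"
  else echo "unknown"   # fall back to reading the code directly
  fi
}
```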
### 2. Confirm phases
Standard phases — adjust based on what the repo needs:
- Dockerfile — multi-stage build for the app
- docker-compose — local dev stack with all services
- Production compose — `docker-compose.prod.yml` with volumes, restart policies, resource limits
- Pre-flight script — `scripts/preflight.sh` validates the environment before deploy
- Deploy script — `scripts/deploy.sh` orchestrates the full deployment
Present to the user as a numbered list. Merge or skip phases if the repo is simple.
### 3. Write the Dockerfile (Phase 1)
Use a multi-stage build:
- Stage 1 (builder): install deps, compile/bundle
- Stage 2 (runtime): copy only built artifacts, run as non-root user
Pin the base image to a specific minor version (e.g. `node:20.11-alpine`). Add a `.dockerignore` that excludes `node_modules`, `.env`, `.git`, and build artifacts.
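For a Node.js app, the multi-stage layout might look like the sketch below — the `npm run build` step, the `dist/` output directory, and the `dist/server.js` entrypoint are assumptions to adapt to the repo:

```dockerfile
# Stage 1 (builder): install all deps and compile/bundle
FROM node:20.11-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build              # assumed to emit ./dist

# Stage 2 (runtime): production deps and built artifacts only
FROM node:20.11-alpine
WORKDIR /app
ENV NODE_ENV=production
COPY --from=builder /app/package*.json ./
RUN npm ci --omit=dev
COPY --from=builder /app/dist ./dist
USER node                      # non-root user shipped with the node image
EXPOSE 3000
CMD ["node", "dist/server.js"] # assumed entrypoint
```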
### 4. Write docker-compose files (Phases 2–3)
Dev compose: mounts source for hot reload, exposes debug ports, uses named volumes for DB data.
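A sketch of that dev compose — service names, ports, and the `npm run dev` hot-reload command are assumptions for a Node.js app:

```yaml
services:
  app:
    build: .
    command: npm run dev          # assumed hot-reload command
    volumes:
      - .:/app                    # mount source for live reload
      - /app/node_modules         # keep container-installed deps
    ports:
      - "3000:3000"
      - "9229:9229"               # Node.js debug port
    env_file: .env
    depends_on:
      - db
  db:
    image: postgres:16.2-alpine
    environment:
      POSTGRES_PASSWORD: dev      # local-only credential
    volumes:
      - db_data:/var/lib/postgresql/data
volumes:
  db_data:
```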
Prod compose: no source mounts, `restart: unless-stopped`, healthchecks on every service, explicit volume declarations, resource limits (`mem_limit`, `cpus`). See REFERENCE.md for volume and healthcheck patterns.
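A hedged sketch of the production compose — the registry, image tag, `/healthz` endpoint, and limits are placeholders:

```yaml
services:
  app:
    image: registry.example.com/app:1.4.2   # pinned tag, never latest
    restart: unless-stopped
    env_file: .env.production
    mem_limit: 512m
    cpus: "1.0"
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:3000/healthz"]
      interval: 30s
      timeout: 5s
      retries: 3
    depends_on:
      db:
        condition: service_healthy
  db:
    image: postgres:16.2-alpine
    restart: unless-stopped
    env_file: .env.production
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5
    volumes:
      - db_data:/var/lib/postgresql/data    # named volume, not a bind mount
volumes:
  db_data:
```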
### 5. Write pre-flight script (Phase 4)
Copy `scripts/preflight.sh` from this skill's `scripts/` directory into the project. It validates:
- Required env vars are set and non-empty
- Docker and docker-compose are installed and reachable
- No port conflicts on required ports
- DB connection string is reachable (optional ping)
- Image builds successfully (dry run)
Run it with `bash scripts/preflight.sh` — it exits non-zero on any failure.
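A minimal sketch of the first three checks; the function names here are illustrative, not part of the skill's bundled script:

```shell
#!/usr/bin/env bash
set -euo pipefail

fail() { echo "preflight: $*" >&2; exit 1; }

# Check 1: required env vars are set and non-empty.
require_env() {
  for var in "$@"; do
    [ -n "${!var:-}" ] || fail "missing required env var: $var"
  done
}

# Check 2: required tools are installed and on PATH.
require_cmd() {
  for cmd in "$@"; do
    command -v "$cmd" >/dev/null 2>&1 || fail "missing command: $cmd"
  done
}

# Check 3: nothing is already listening on a required port.
port_free() {
  ! (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}
```

A real run would then call e.g. `require_env DATABASE_URL` and `require_cmd docker`, and finish with the dry-run build.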
### 6. Write deploy script (Phase 5)
`scripts/deploy.sh` flow:
- Run `preflight.sh` — abort if it fails
- Pull latest images / build new image
- Run DB migrations (if applicable)
- Rolling restart: bring up new containers before stopping old ones
- Health-check the running stack
- Print service URLs and status
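The flow above can be sketched as follows; the `DRY_RUN` wrapper and the `./migrate` entrypoint are assumptions for illustration:

```shell
#!/usr/bin/env bash
set -euo pipefail

COMPOSE="docker compose -f docker-compose.prod.yml"

# Echo instead of executing when DRY_RUN=1, so the flow can be previewed.
run() {
  if [ "${DRY_RUN:-0}" = "1" ]; then echo "+ $*"; else "$@"; fi
}

deploy() {
  run bash scripts/preflight.sh           # abort early if the environment is bad
  run $COMPOSE pull                       # or: run $COMPOSE build
  run $COMPOSE run --rm app ./migrate     # hypothetical migration entrypoint
  run $COMPOSE up -d --wait app           # recreate and wait for healthchecks
                                          # (true zero-downtime needs a proxy
                                          #  or an external rollout tool)
  run $COMPOSE ps                         # print status and exposed ports
}
```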
### 7. Verify after each phase
```shell
docker build -t app:test .                   # Dockerfile compiles
docker compose config                        # compose files are valid YAML
docker compose up -d && docker compose ps    # services start healthy
bash scripts/preflight.sh                    # pre-flight passes
```
## Guardrails
- Never embed secrets in Dockerfiles or compose files — use env files or secrets mounts.
- Always run containers as a non-root user.
- Do not use `latest` image tags in production — pin versions.
- Volumes for DB data must use named volumes, never bind mounts to host paths.
- If the repo has no `.env.example`, create one before writing any Docker config.
## References
- REFERENCE.md — volume patterns, healthcheck templates, resource limits, multi-stage examples by runtime, rolling deploy strategies.