coolify-deploy
Coolify Deployment Skill
Critical Importance
This deployment process is critical. Proper deployment prevents production outages, security vulnerabilities, and user-facing errors. A poorly executed deployment can result in lost revenue, damaged reputation, and emergency firefighting. Every deployment must follow best practices to ensure reliability.
Systematic Approach
Approach this deployment systematically. Deployments require careful planning, thorough verification, and methodical execution. Rushing or skipping checks leads to avoidable incidents. Follow the checklist methodically, verify each step, and ensure all safety measures are in place before proceeding.
The Challenge
Deploy flawlessly every time. It's a high bar, but if you can:
- You'll maintain production stability
- Users will experience zero downtime
- Rollbacks will be instant and painless
- The team will trust your deployment process
Mastering Coolify deployment requires balancing automation with manual verification. Can you configure deployments that run automatically while still providing safety nets and quick recovery options?
Project Types
Static Sites (Astro, Svelte, Hugo, Jekyll)
Build Command: [your build tool] build
Output Directory: dist (or public, _site, build — check your framework)
Application Containers (Any Runtime)
Build Command: [install dependencies] && [build]
Start Command: [your runtime] [entry point]
Port: [your app port]
Examples by language (build command / start command):
- Node.js: npm run build / node dist/index.js
- Python: pip install -r requirements.txt / uvicorn app.main:app
- Go: go build -o app / ./app
- Rust: cargo build --release / ./target/release/app
Docker-Based Applications
Dockerfile: ./Dockerfile
Port: [your container port]
Deployment Checklist
Before Deploying
- All tests passing locally
- Environment variables configured in Coolify dashboard
- Health check endpoint verified (/health)
- Database migrations reviewed (if applicable)
- Rollback plan documented
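The checklist above can be enforced mechanically. A minimal sketch of a pre-deploy gate that refuses to proceed unless every item is explicitly confirmed (the item names are illustrative, not part of Coolify):

```python
# Pre-deploy gate: every checklist item must be explicitly confirmed.
PRE_DEPLOY_CHECKLIST = [
    "tests_passing",
    "env_vars_configured",
    "health_endpoint_verified",
    "migrations_reviewed",
    "rollback_plan_documented",
]

def ready_to_deploy(confirmed):
    """Return (ok, missing) where missing lists unconfirmed items."""
    missing = [item for item in PRE_DEPLOY_CHECKLIST if not confirmed.get(item)]
    return (len(missing) == 0, missing)
```

Wire this into your deploy script so a missing item aborts before anything ships.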
During Deployment
- Build succeeds without errors
- Health check passes after deploy
- No error spikes in logs
- Response times within normal range
After Deployment
- Smoke test critical paths
- Monitor error rates for 15 minutes
- Verify database migrations completed
- Update deployment log
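Smoke-testing critical paths can be scripted. A sketch with an injectable HTTP getter so the logic is testable without a live server (the paths and expected statuses are examples, not your app's real routes):

```python
# Smoke-test critical paths after a deploy. `fetch(url)` returns an HTTP
# status code; it is injectable so the checks can run without a live server.
def smoke_test(base_url, paths, fetch):
    """Return a list of (path, passed) results for each (path, expected_status)."""
    results = []
    for path, expected in paths:
        try:
            status = fetch(base_url + path)
        except Exception:
            status = None  # connection failure counts as a failed check
        results.append((path, status == expected))
    return results
```

In production you might pass something like `lambda url: urllib.request.urlopen(url).status` as `fetch`.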
Environment Variables
Set these in Coolify dashboard under Environment Variables:
ENVIRONMENT=production
PORT=3000
DATABASE_URL=postgresql://user:pass@host:5432/dbname
# Add your app-specific variables
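Missing variables should fail the app at startup, not at first use. A minimal fail-fast check using the variable names above (extend REQUIRED_VARS with your app-specific ones):

```python
import os

REQUIRED_VARS = ["ENVIRONMENT", "PORT", "DATABASE_URL"]

def check_env(env=os.environ):
    """Raise at startup if any required variable is missing or empty."""
    missing = [name for name in REQUIRED_VARS if not env.get(name)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
```

Call `check_env()` before binding the port so a misconfigured container exits immediately with a clear message.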
Health Check Setup
Add a /health endpoint to your application. Examples by language:
Python (Flask):
from datetime import datetime
from flask import jsonify

@app.route('/health')
def health():
    return jsonify(status='ok', timestamp=datetime.utcnow().isoformat())
Go:
http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
    json.NewEncoder(w).Encode(map[string]string{"status": "ok"})
})
Node.js (Express):
app.get('/health', (req, res) => {
    res.status(200).json({ status: 'ok' });
});
Configure in Coolify:
- Health Check URL: /health
- Health Check Interval: 30 seconds
Nixpacks Configuration
For automatic build detection, add nixpacks.toml. Coolify auto-detects most runtimes, but you can customize:
[phases.setup]
nixPkgs = ["<your-runtime-package>"]
[phases.install]
cmds = ["<install-command>"]
[phases.build]
cmds = ["<build-command>"]
[start]
cmd = "<start-command>"
Consult the Nixpacks docs for your specific runtime.
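As a concrete illustration, a Node.js app might fill in the template like this (the package name and scripts are assumptions; check your package.json and the Nixpacks package list):

```toml
[phases.setup]
nixPkgs = ["nodejs_20"]

[phases.install]
cmds = ["npm ci"]

[phases.build]
cmds = ["npm run build"]

[start]
cmd = "node dist/index.js"
```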
Rollback
If deployment fails:
- In Coolify dashboard, go to Deployments
- Find the last working deployment
- Click "Redeploy" on the working version
- Verify health check passes
Or via CLI:
coolify deployments redeploy --applicationUuid "app-uuid" --deploymentUuid "last-good-deployment-uuid"
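Finding the "last good" deployment can be automated if you have a list of deployment records. A sketch of the selection logic; the record shape (uuid, status, finished_at fields) is illustrative, not Coolify's actual API response:

```python
# Pick the most recent successful deployment to redeploy.
# The dict fields here are assumed, not Coolify's real API schema.
def last_good_deployment(deployments):
    """Return the uuid of the newest deployment with status 'success', or None."""
    good = [d for d in deployments if d["status"] == "success"]
    if not good:
        return None
    return max(good, key=lambda d: d["finished_at"])["uuid"]
```

The returned uuid would feed the `--deploymentUuid` flag in the CLI command above.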
Deployment Confidence Assessment
After completing each deployment, rate your confidence from 0.0 to 1.0:
- 0.8-1.0: Confident deployment went smoothly, all checks passed, rollback plan tested
- 0.5-0.8: Deployment succeeded but some steps were uncertain or skipped
- 0.2-0.5: Deployment completed with concerns, manual intervention needed
- 0.0-0.2: Deployment failed or completed with significant issues
Document any uncertainty areas or risks identified during the deployment process.
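The bands above can be encoded for a deployment log. One sketch, with boundary scores assigned to the higher band (an assumption, since the ranges in the list overlap at 0.2, 0.5, and 0.8):

```python
# Map a 0.0-1.0 confidence score to the bands described above.
# Boundary values go to the higher band (an assumption; the text overlaps).
def confidence_band(score):
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be in [0.0, 1.0]")
    if score >= 0.8:
        return "confident"
    if score >= 0.5:
        return "uncertain steps"
    if score >= 0.2:
        return "completed with concerns"
    return "failed or significant issues"
```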
Anti-Rationalization Table
| Excuse | Counter |
|---|---|
| "I'll skip the pre-deploy checklist, it's just a small change" | Small changes break production too. The checklist catches what assumptions miss. |
| "The health check is optional" | Without a health check, you cannot verify the deployment succeeded. |
| "I'll configure environment variables after deploy" | Missing env vars cause startup failures. Configure them before deploying. |
| "Rollback is too complex, I'll fix forward if it breaks" | Fixing forward under pressure introduces more risk than a clean rollback. |
| "I don't need to monitor after deploy" | The first 15 minutes after deploy are when issues surface. Monitor actively. |