# Ollama Stack

Deploy a local LLM stack for offline and privacy-first workflows.
## Minimal Setup

```shell
# Install Ollama (Linux/macOS)
curl -fsSL https://ollama.com/install.sh | sh

# Start the server (the Linux installer also registers a systemd service,
# so this step may already be running in the background)
ollama serve

# Download a model, then chat with it interactively
ollama pull llama3.1:8b
ollama run llama3.1:8b
```
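Once the server is up, the same model is reachable over Ollama's local REST API on port 11434; the prompt below is illustrative:

```shell
# Query the local REST API (requires the server from the steps above to be running)
curl -s http://localhost:11434/api/generate \
  -d '{"model": "llama3.1:8b", "prompt": "Why is the sky blue?", "stream": false}'
```

With `"stream": false` the response arrives as a single JSON object rather than a stream of tokens, which is easier to pipe into tools like `jq`.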
## Docker Compose Pattern
- Ollama container with persistent model volume
- Open WebUI for chat interface
- Optional LiteLLM proxy for unified API routing
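A minimal sketch of the first two pieces of this pattern, assuming the official `ollama/ollama` and `ghcr.io/open-webui/open-webui` images; service names, ports, and the volume name are illustrative:

```yaml
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama_models:/root/.ollama    # persistent model volume
    ports:
      - "127.0.0.1:11434:11434"        # bind to loopback only

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    ports:
      - "127.0.0.1:3000:8080"
    depends_on:
      - ollama

volumes:
  ollama_models:
```

Binding both published ports to `127.0.0.1` keeps the stack off the LAN by default; a LiteLLM proxy service could be added alongside these two when unified API routing is needed.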
## Best Practices
- Pin model versions for reproducibility.
- Monitor VRAM, RAM, and swap utilization.
- Restrict network exposure to trusted subnets.
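A sketch of what the first and third practices look like in shell terms (values are illustrative; `OLLAMA_HOST` is the server's documented bind-address variable, and the `nvidia-smi` check applies to NVIDIA GPUs only):

```shell
# Bind the server to loopback only; avoid 0.0.0.0 unless the
# host sits on a trusted subnet behind a firewall
export OLLAMA_HOST=127.0.0.1:11434

# Pull by an explicit tag rather than the mutable "latest"
ollama pull llama3.1:8b

# Check GPU memory headroom before loading larger models
nvidia-smi --query-gpu=memory.used,memory.total --format=csv
```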
## Related Skills
- mac-mini-llm-lab - Apple Silicon optimization
- docker-compose - Service orchestration