Ollama Stack

Deploy a local LLM stack for offline and privacy-first workflows.

Minimal Setup

# Install Ollama (official install script for Linux/macOS)
curl -fsSL https://ollama.com/install.sh | sh
# Start the server (on Linux the installer also registers a systemd service)
ollama serve
# Download a model, then chat with it interactively
ollama pull llama3.1:8b
ollama run llama3.1:8b
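Once the server is running, it exposes a REST API on localhost port 11434. A minimal sketch of a non-streaming call to the /api/generate endpoint, using only the Python standard library (the endpoint, payload fields, and "response" key follow Ollama's API; the helper function names here are illustrative, not part of any library):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama port

def build_payload(model: str, prompt: str, stream: bool = False) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": stream}

def generate(model: str, prompt: str) -> str:
    """Send one non-streaming generation request to the local server."""
    body = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        # Non-streaming responses carry the full completion in "response"
        return json.loads(resp.read())["response"]

# Usage (requires `ollama serve` running and the model already pulled):
# print(generate("llama3.1:8b", "Say hello in one word."))
```

Keeping `stream=False` returns one JSON object; with streaming enabled the server instead emits one JSON object per line, which needs incremental parsing.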

Docker Compose Pattern

  • Ollama container with persistent model volume
  • Open WebUI for chat interface
  • Optional LiteLLM proxy for unified API routing
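The three bullets above can be sketched as a compose file. This is a minimal, hedged example: the `ollama/ollama` and `ghcr.io/open-webui/open-webui` images and the `OLLAMA_BASE_URL` variable are real, but volume and port choices are assumptions to adapt; the LiteLLM proxy is omitted for brevity:

```yaml
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama_models:/root/.ollama   # persist pulled models across restarts
    ports:
      - "127.0.0.1:11434:11434"       # bind to loopback only

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    ports:
      - "127.0.0.1:3000:8080"         # chat UI on http://localhost:3000
    depends_on:
      - ollama

volumes:
  ollama_models:
```

Binding both published ports to 127.0.0.1 keeps the stack off the LAN by default; drop the prefix deliberately if remote access is needed.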

Best Practices

  • Pin model versions for reproducibility.
  • Monitor VRAM, RAM, and swap utilization.
  • Restrict network exposure to trusted subnets.
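On the last point: Ollama binds to loopback by default, and exposure is controlled through the documented `OLLAMA_HOST` variable. A sketch of widening it via a systemd drop-in, assuming a Linux install where the installer created the `ollama` service (the firewall rule is an illustrative example, not a complete policy):

```ini
# /etc/systemd/system/ollama.service.d/override.conf
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
# Pair this with firewall rules that admit only trusted subnets, e.g.:
#   ufw allow from 192.168.1.0/24 to any port 11434
```

After editing, run `systemctl daemon-reload` and `systemctl restart ollama` for the override to take effect.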
