# Spice Model Providers

Model providers enable LLM chat completions and ML inference through a unified OpenAI-compatible API.

## Basic Configuration

```yaml
models:
  - from: <provider>:<model_id>
    name: <model_name>
    params:
      <provider>_api_key: ${ secrets:API_KEY }
      tools: auto                    # optional: enable runtime tools
      system_prompt: 'You are...'    # optional: default system prompt
```
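
Model definitions live in the pod manifest (`spicepod.yaml`). A minimal sketch of loading them, assuming the Spice CLI is installed:

```bash
# Start the runtime; models defined in spicepod.yaml are loaded
# and served over the OpenAI-compatible HTTP API.
spice run
```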

## Provider Prefixes

| Provider   | `from` Format            | Example                        |
|------------|--------------------------|--------------------------------|
| openai     | `openai:<model_id>`      | `openai:gpt-4o`                |
| anthropic  | `anthropic:<model_id>`   | `anthropic:claude-sonnet-4-5`  |
| azure      | `azure:<deployment>`     | `azure:my-gpt4-deployment`     |
| bedrock    | `bedrock:<model_id>`     | `bedrock:anthropic.claude-3`   |
| google     | `google:<model_id>`      | `google:gemini-pro`            |
| xai        | `xai:<model_id>`         | `xai:grok-beta`                |
| databricks | `databricks:<endpoint>`  | `databricks:llama-3-70b`       |
| spiceai    | `spiceai:<model>`        | `spiceai:llama3`               |
| hf         | `hf:<repo_id>`           | `hf:meta-llama/Llama-3-8B`     |
| file       | `file:<path>`            | `file:./models/llama.gguf`     |
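
Any prefix above slots into the Basic Configuration template. For instance, a sketch for Anthropic, assuming the key parameter follows the `<provider>_api_key` pattern shown earlier:

```yaml
models:
  - from: anthropic:claude-sonnet-4-5
    name: claude
    params:
      anthropic_api_key: ${ secrets:ANTHROPIC_API_KEY }
```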

## Common Parameters

| Parameter       | Description                                        |
|-----------------|----------------------------------------------------|
| `tools`         | Runtime tools: `auto`, `sql`, `search`, `memory`   |
| `system_prompt` | Default system prompt for all requests             |
| `endpoint`      | Override the API endpoint (for compatible providers) |
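
The `endpoint` parameter lets a model definition point at any OpenAI-compatible server. A sketch, where the URL and model name are illustrative only:

```yaml
models:
  - from: openai:llama3                     # openai prefix selects the OpenAI-compatible client
    name: local_api_llama
    params:
      endpoint: http://localhost:11434/v1   # illustrative: a local OpenAI-compatible server
```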

## Examples

### OpenAI Model

```yaml
models:
  - from: openai:gpt-4o
    name: gpt4
    params:
      openai_api_key: ${ secrets:OPENAI_API_KEY }
      tools: auto
```

### Model with Memory

```yaml
datasets:
  - from: memory:store
    name: llm_memory
    access: read_write

models:
  - from: openai:gpt-4o
    name: assistant
    params:
      openai_api_key: ${ secrets:OPENAI_API_KEY }
      tools: memory, sql
```

### Local Model (GGUF)

```yaml
models:
  - from: file:./models/llama-3.gguf
    name: local_llama
```

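Hugging Face models are referenced the same way, by repo ID. A sketch; the `hf_token` parameter is an assumption for gated repositories:

```yaml
models:
  - from: hf:meta-llama/Llama-3-8B
    name: hf_llama
    params:
      hf_token: ${ secrets:HF_TOKEN }   # assumption: access token for gated repos
```
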
## Using Models

Query any configured model via the OpenAI-compatible API:

```bash
curl http://localhost:8090/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt4", "messages": [{"role": "user", "content": "Hello"}]}'
```

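Because the endpoint is OpenAI-compatible, standard request options should carry over. A sketch of a streaming request, assuming `stream` is honored as in the OpenAI API:

```bash
curl http://localhost:8090/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt4", "messages": [{"role": "user", "content": "Hello"}], "stream": true}'
```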