Spicepod Configuration

A Spicepod is the configuration manifest for a Spice application, defining datasets, models, embeddings, runtime settings, and other components.

Basic Structure

version: v1
kind: Spicepod
name: my_app

secrets:
  - from: env
    name: env

datasets:
  - from: <connector>:<path>
    name: <dataset_name>

models:
  - from: <provider>:<model>
    name: <model_name>

embeddings:
  - from: <provider>:<model>
    name: <embedding_name>

runtime:
  # Server, caching, and telemetry settings

Component Sections

| Section | Purpose | Skill Reference |
|---|---|---|
| datasets | Data sources for SQL queries | spice-data-connector |
| models | LLM/ML models for inference | spice-models |
| embeddings | Embedding models for vector search | spice-embeddings |
| secrets | Secure credential management | spice-secrets |
| catalogs | External data catalog connections | spice-catalogs |
| views | Virtual tables from SQL queries | spice-views |
| tools | LLM function calling capabilities | spice-tools |
| workers | Model load balancing and routing | spice-workers |
| runtime | Server ports, caching, telemetry | (this skill) |

Quick Start Example

version: v1
kind: Spicepod
name: quickstart

secrets:
  - from: env
    name: env

datasets:
  - from: postgres:public.users
    name: users
    params:
      pg_host: localhost
      pg_port: 5432
      pg_user: ${ env:PG_USER }
      pg_pass: ${ env:PG_PASS }
    acceleration:
      enabled: true
      engine: duckdb
      refresh_check_interval: 5m

models:
  - from: openai:gpt-4o
    name: assistant
    params:
      openai_api_key: ${ secrets:OPENAI_API_KEY }
      tools: auto

Runtime Configuration

Configure server ports, caching, and observability under runtime:

runtime:
  http:
    enabled: true
    port: 8090
  flight:
    enabled: true
    port: 50051

Results Caching

In-memory caching for SQL and search results (enabled by default):

runtime:
  caching:
    sql_results:
      enabled: true
      max_size: 128MiB           # cache size limit
      item_ttl: 1s               # time-to-live per entry
      eviction_policy: lru       # lru or tiny_lfu
      encoding: none             # none or zstd (compression)
    search_results:
      enabled: true
      max_size: 128MiB
      item_ttl: 1s
    embeddings:
      enabled: true
      max_size: 128MiB
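
The `eviction_policy` setting decides which entries are dropped once the cache hits `max_size`. A minimal illustration of LRU semantics (evicting by item count here for simplicity; not Spice's implementation, which limits by bytes):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: evicts the least-recently-used entry when full."""

    def __init__(self, max_items):
        self.max_items = max_items
        self.entries = OrderedDict()

    def get(self, key):
        if key not in self.entries:
            return None
        self.entries.move_to_end(key)  # mark as most recently used
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = value
        if len(self.entries) > self.max_items:
            self.entries.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("q1", "result1")
cache.put("q2", "result2")
cache.get("q1")             # touching q1 makes q2 the eviction candidate
cache.put("q3", "result3")  # evicts q2, not q1
```

The `tiny_lfu` policy additionally tracks access frequency, so rarely-hit entries are evicted before frequently-hit ones even if they were used more recently.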

Stale-While-Revalidate

Serve stale cache entries while refreshing in background:

runtime:
  caching:
    sql_results:
      item_ttl: 10s
      stale_while_revalidate_ttl: 10s  # serve stale for 10s while refreshing
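
The resulting behavior splits an entry's lifetime into three age windows: fresh within `item_ttl`, served stale while a background refresh runs for the next `stale_while_revalidate_ttl` seconds, and a full cache miss after that. An illustrative sketch of the window logic, not Spice's implementation:

```python
ITEM_TTL = 10.0  # seconds, matches item_ttl above
SWR_TTL = 10.0   # seconds, matches stale_while_revalidate_ttl above

def cache_state(inserted_at, now):
    """Classify a cache entry's state under stale-while-revalidate."""
    age = now - inserted_at
    if age <= ITEM_TTL:
        return "fresh"    # served directly from cache
    if age <= ITEM_TTL + SWR_TTL:
        return "stale"    # served stale; refresh triggered in background
    return "expired"      # cache miss; query runs synchronously

t0 = 1000.0
print(cache_state(t0, t0 + 5))   # fresh
print(cache_state(t0, t0 + 15))  # stale
print(cache_state(t0, t0 + 25))  # expired
```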

Observability & Telemetry

runtime:
  telemetry:
    enabled: true
    otel_exporter:
      endpoint: 'localhost:4317'   # OpenTelemetry collector
      push_interval: 60s
      metrics:                     # optional: filter specific metrics
        - query_duration_ms
        - query_executions

The Prometheus metrics endpoint runs on port 9090 by default:

curl http://localhost:9090/metrics
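
The endpoint serves metrics in the Prometheus text exposition format. A minimal sketch of extracting one value from that output; the sample lines and labels below are illustrative, not actual runtime output:

```python
def parse_metric(text, name):
    """Return the value of the first sample whose name matches, else None."""
    for line in text.splitlines():
        if line.startswith("#"):  # skip HELP/TYPE comment lines
            continue
        if line.startswith(name):
            # exposition format: metric_name{labels} value
            return float(line.rsplit(" ", 1)[1])
    return None

# Hypothetical sample of /metrics output:
sample = """\
# HELP query_executions Number of query executions
# TYPE query_executions counter
query_executions{status="success"} 42
"""
print(parse_metric(sample, "query_executions"))  # 42.0
```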

Dataset with Acceleration

datasets:
  - from: s3://my-bucket/data/
    name: events
    params:
      file_format: parquet
    acceleration:
      enabled: true
      engine: duckdb
      mode: file
      refresh_mode: append
      refresh_check_interval: 1h

See spice-data-connector for connector options and spice-accelerators for acceleration configuration.
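
Acceleration refreshes can also be scoped to a subset of rows. A hedged sketch, assuming the `refresh_sql` acceleration parameter covered in spice-accelerators (verify the exact parameter name there; the `event_time` column is hypothetical):

```yaml
datasets:
  - from: s3://my-bucket/data/
    name: events
    params:
      file_format: parquet
    acceleration:
      enabled: true
      engine: duckdb
      refresh_check_interval: 1h
      # Only materialize the last 7 days of events on each refresh
      refresh_sql: |
        SELECT * FROM events
        WHERE event_time > now() - INTERVAL '7 days'
```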

AI-Powered Application

embeddings:
  - from: openai:text-embedding-3-small
    name: embed
    params:
      openai_api_key: ${ secrets:OPENAI_API_KEY }

datasets:
  - from: postgres:documents
    name: docs
    acceleration:
      enabled: true
    columns:
      - name: content
        embeddings:
          - from: embed
            row_id: id

models:
  - from: openai:gpt-4o
    name: assistant
    params:
      openai_api_key: ${ secrets:OPENAI_API_KEY }
      tools: auto, search

See spice-models for model providers and spice-embeddings for embedding configuration.

CLI Commands

spice init my_app      # Create new spicepod.yaml
spice run              # Start runtime with current spicepod
spice sql              # Interactive SQL shell
spice chat             # Interactive chat with models
spice status           # Check runtime status

