# Spice Secret Stores

Secret stores manage sensitive data such as API keys, passwords, and tokens. The `env` store is loaded by default.
## Basic Configuration

```yaml
secrets:
  - from: <store_type>
    name: <store_name>
```
## Supported Secret Stores

| Store | `from` Format | Description |
|---|---|---|
| Environment | `env` | Environment variables plus `.env` / `.env.local` files (default) |
| Kubernetes | `kubernetes:<secret_name>` | Kubernetes secrets |
| AWS Secrets Manager | `aws_secrets_manager` | AWS Secrets Manager |
| Keyring | `keyring` | OS keyring (macOS Keychain, Linux, Windows) |
## Default: Environment Variables

The `env` store is loaded automatically. It reads from environment variables and from any `.env.local` or `.env` files in the project directory.

```yaml
secrets:
  - from: env
    name: env
```
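For instance, a project-local `.env` file could supply the keys referenced elsewhere in the spicepod (the key names and values below are illustrative, not required by Spice):

```shell
# .env — hypothetical example; keys are placeholders
PG_USER=admin
PG_PASSWORD=changeme
OPENAI_API_KEY=sk-...
```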
## Referencing Secrets

Use the `${ store_name:KEY_NAME }` syntax in component parameters:

```yaml
datasets:
  - from: postgres:my_table
    name: my_table
    params:
      pg_user: ${ env:PG_USER }
      pg_pass: ${ env:PG_PASSWORD }

models:
  - from: openai:gpt-4o
    name: gpt4
    params:
      openai_api_key: ${ secrets:OPENAI_API_KEY }
```
References also interpolate within strings:

```yaml
params:
  mysql_connection_string: mysql://${env:USER}:${env:PASSWORD}@localhost:3306/db
```
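Conceptually, in-string interpolation substitutes each `${ env:KEY }` placeholder with the store's value for `KEY`. A minimal sketch, assuming a single env-style store backed by a plain dict (this is not the runtime's actual implementation):

```python
import re

# Matches ${ env:KEY } with optional whitespace inside the braces.
PLACEHOLDER = re.compile(r"\$\{\s*env:([A-Za-z_][A-Za-z0-9_]*)\s*\}")

def interpolate(value: str, env: dict) -> str:
    """Replace each ${ env:KEY } in `value` with env[KEY] (empty if missing)."""
    return PLACEHOLDER.sub(lambda m: env.get(m.group(1), ""), value)
```

For example, `interpolate("mysql://${env:USER}:${env:PASSWORD}@localhost:3306/db", {"USER": "app", "PASSWORD": "pw"})` yields the fully expanded connection string.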
## Searching All Stores

Use `${ secrets:KEY }` to search all configured stores in precedence order (the last store defined wins):

```yaml
secrets:
  - from: env
    name: env
  - from: keyring
    name: keyring

datasets:
  - from: postgres:my_table
    name: my_table
    params:
      pg_user: ${ secrets:pg_user } # checks keyring first, then env
      pg_pass: ${ secrets:pg_pass }
```

The key name is automatically uppercased for the `env` secret store.
## Examples

### Kubernetes Secrets

```yaml
secrets:
  - from: kubernetes:my-app-secrets
    name: k8s
```

### AWS Secrets Manager

```yaml
secrets:
  - from: aws_secrets_manager
    name: aws
    params:
      aws_region: us-east-1
```
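A component could then reference keys from that store by its configured name (the dataset and key names below are illustrative):

```yaml
datasets:
  - from: postgres:orders # hypothetical dataset
    name: orders
    params:
      pg_pass: ${ aws:PG_PASSWORD }
```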
### Override Order (env overrides keyring)

```yaml
secrets:
  - from: keyring
    name: keyring
  - from: env
    name: env
```