spice-text-to-sql
No SKILL.md available for this skill.
More from spiceai/skills
spice-models
Configure AI/LLM model providers and connections in Spice — OpenAI, Anthropic, Azure, Google, xAI, Bedrock, Perplexity, Databricks, HuggingFace, and local GGUF models. Use this skill whenever the user wants to add a model, configure a specific LLM provider, set up an OpenAI-compatible endpoint (e.g. Groq, Ollama), serve a local model, configure system prompts, set parameter overrides (temperature, response format), or understand which providers are available. This skill is the model connector reference. For AI features like tools, memory, workers, and NSQL, see spice-ai.
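A minimal illustrative spicepod.yaml fragment for the kind of provider configuration this skill covers. The model id, secret name, and parameter values below are assumptions for the sketch — consult the skill itself for the authoritative provider reference.

```yaml
# Sketch only: registers an OpenAI model with a parameter override.
models:
  - name: gpt-4o
    from: openai:gpt-4o            # provider prefix selects the connector
    params:
      openai_api_key: ${ secrets:SPICE_OPENAI_API_KEY }
      temperature: 0.2             # example parameter override
```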
spice-acceleration
Accelerate data locally for sub-second query performance — the feature and its configuration. Use this skill whenever the user asks about data acceleration concepts, enabling acceleration on a dataset, choosing refresh modes (full, append, changes, caching), configuring retention policies, setting up snapshots for cold-start, adding indexes and constraints, or understanding the difference between federated and accelerated queries. This skill covers the "what and why" of acceleration. For choosing which acceleration engine to use (Arrow vs DuckDB vs SQLite vs Cayenne), see spice-accelerators.
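As a rough sketch of the configuration surface described above, a dataset with local acceleration enabled might look like the following. The source connector, engine choice, and interval are illustrative assumptions:

```yaml
# Sketch only: accelerates a federated dataset locally.
datasets:
  - from: postgres:public.orders   # assumed upstream source
    name: orders
    acceleration:
      enabled: true
      engine: duckdb               # engine choice is covered by spice-accelerators
      refresh_mode: full           # full | append | changes
      refresh_check_interval: 10m
```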
spice-ai
Add AI and LLM capabilities to Spice — tools, NSQL (text-to-SQL), memory, model routing/workers, and evals. Use this skill whenever the user wants to enable LLM tools (SQL, search, memory, MCP, web search), set up text-to-SQL via /v1/nsql, add persistent conversational memory, configure model routing with workers (load balancing, fallback, weighted distribution), set up evals, or use the OpenAI-compatible chat API. This skill covers AI features and orchestration. For configuring individual model providers (OpenAI, Anthropic, etc.), see spice-models.
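The tool-enablement step described above can be sketched as a model parameter on the spicepod model definition. The `tools` value and secret name here are assumptions; the skill documents the exact tool names and the /v1/nsql and chat endpoints:

```yaml
# Sketch only: a chat model with LLM tools enabled.
models:
  - name: assistant
    from: openai:gpt-4o
    params:
      openai_api_key: ${ secrets:SPICE_OPENAI_API_KEY }
      tools: auto                  # assumed value; specific tools can be listed instead
```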
spice-search
Search data using vector similarity, full-text keywords, or hybrid methods with Reciprocal Rank Fusion (RRF). Use this skill whenever the user wants to set up semantic search, full-text search, or hybrid search in Spice — including configuring embedding models and providers, enabling full_text_search on columns, writing vector_search/text_search/rrf SQL queries, using the /v1/search HTTP API, configuring vector engines (S3 Vectors), tuning RRF parameters (rank_weight, recency_decay), or setting up chunking for long documents. Also use when the user asks about search relevance, BM25 scoring, or embedding configuration.
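A hedged sketch of the embedding-plus-column wiring this skill configures: an embedding model is registered once, then referenced from the columns to be searched. The source path and model id are assumptions:

```yaml
# Sketch only: vector search over a document column.
embeddings:
  - name: embed
    from: openai:text-embedding-3-small
    params:
      openai_api_key: ${ secrets:SPICE_OPENAI_API_KEY }

datasets:
  - from: file:docs.parquet        # assumed source
    name: docs
    columns:
      - name: body
        embeddings:
          - from: embed            # embed this column for vector_search
```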
spice-workers
Configure workers for model load balancing and fallback in Spice. Use when asked to "add load balancing", "configure model fallback", "set up worker", or "route between models".
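A hypothetical shape for the routing this skill sets up — two providers with a worker choosing between them. The `workers` keys below are guesses at the schema, not the documented syntax; use this skill for the real reference:

```yaml
# Hypothetical sketch — key names under `workers` are assumptions.
models:
  - name: primary
    from: openai:gpt-4o
  - name: fallback
    from: anthropic:claude-3-5-sonnet-latest

workers:
  - name: chat
    strategy: fallback             # assumed: route to fallback if primary fails
    models:
      - primary
      - fallback
```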