# Turso DB
Turso is an in-process relational database engine aiming for full SQLite compatibility. Unlike client-server databases, it runs in your application's memory space with sub-microsecond read/write latencies.
## Installation

```sh
# Via installer script
curl --proto '=https' --tlsv1.2 -LsSf \
  https://github.com/tursodatabase/turso/releases/latest/download/turso_cli-installer.sh | sh

# Via Homebrew
brew install turso
```
## Quick Start

```sh
$ tursodb            # transient in-memory database
$ tursodb mydata.db  # persistent database file
```

```sql
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT);
INSERT INTO users VALUES (1, 'Alice', 'alice@example.com');
SELECT * FROM users;
```
## Core SQL

Standard SQLite-compatible SQL. See `references/sql-reference.md` for full statement syntax (CREATE/ALTER/DROP TABLE, INSERT, SELECT, UPDATE, DELETE, transactions).
Key differences from SQLite:

- No multi-process or multi-threaded access
- No savepoints, triggers, or VACUUM
- Views are experimental (behind the `--experimental-views` flag)
- UTF-8 text encoding only
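Because Turso targets SQLite's SQL dialect, statements from the reference can be prototyped against any SQLite engine before running them in `tursodb`. A minimal sketch using Python's standard-library `sqlite3` module (chosen here purely for illustration; it is not a Turso API):

```python
import sqlite3

# In-memory database; the SQL below is plain SQLite dialect,
# so it also runs unchanged in the tursodb shell.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")

# Group the writes in one transaction: either both rows commit or neither does.
with con:
    con.execute("INSERT INTO users VALUES (1, 'Alice', 'alice@example.com')")
    con.execute("INSERT INTO users VALUES (2, 'Bob', 'bob@example.com')")

rows = con.execute("SELECT name FROM users ORDER BY id").fetchall()
print(rows)  # [('Alice',), ('Bob',)]
```

Note the limitations above still apply in Turso itself: the transaction here could not be nested with savepoints, for example.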
## JavaScript API
Install and use from Node.js:
```sh
npm i @tursodatabase/database              # native
npm i @tursodatabase/database --cpu wasm32 # WASM
```

```js
import { connect } from '@tursodatabase/database';

const db = await connect('turso.db');
const row = db.prepare('SELECT 1').get();
```

See `references/javascript-api.md` for details.
## Vector Search
Store embeddings and perform similarity search using built-in vector functions.
```sql
CREATE TABLE docs (id INTEGER PRIMARY KEY, content TEXT, embedding BLOB);
INSERT INTO docs VALUES (1, 'ML basics', vector32('[0.2, 0.5, 0.1]'));

SELECT content, vector_distance_cos(embedding, vector32('[0.3, 0.4, 0.2]')) AS dist
FROM docs ORDER BY dist LIMIT 5;
```
Supported types: `vector32`, `vector64`, `vector32_sparse`, `vector8`, `vector1bit`.
Distance functions: `vector_distance_cos`, `vector_distance_l2`, `vector_distance_dot`, `vector_distance_jaccard`.

See `references/vector-search.md` for all vector types, functions, and examples.
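To build intuition for what the query above ranks by, here is a plain-Python sketch of cosine distance, assuming the common convention of distance = 1 − cosine similarity (check `references/vector-search.md` for Turso's exact definition):

```python
import math

def cosine_distance(a, b):
    # 1 - cosine similarity: 0 means same direction, 2 means opposite.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

stored = [0.2, 0.5, 0.1]   # embedding inserted above
query  = [0.3, 0.4, 0.2]   # query vector from the SELECT above
print(round(cosine_distance(stored, query), 4))  # ≈ 0.0507
```

A small distance here means the stored document would rank near the top of the `ORDER BY dist` results.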
## Full-Text Search (Experimental)

Tantivy-powered FTS with BM25 relevance scoring. Requires the `fts` feature to be enabled at compile time.
```sql
CREATE INDEX fts_idx ON articles USING fts (title, body)
WITH (weights = 'title=2.0,body=1.0');

SELECT title, fts_score(title, body, 'database') AS score
FROM articles
WHERE fts_match(title, body, 'database')
ORDER BY score DESC LIMIT 10;
```

See `references/full-text-search.md` for tokenizers, query syntax, and highlighting.
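For intuition about how `fts_score` ranks results, here is a sketch of the textbook BM25 formula. This is not Turso's or Tantivy's exact implementation; the parameters `k1=1.2` and `b=0.75` are the customary defaults, assumed here for illustration:

```python
import math

def bm25_score(tf, doc_len, avg_doc_len, n_docs, doc_freq, k1=1.2, b=0.75):
    # idf: rarer terms (low doc_freq) contribute more to the score.
    idf = math.log(1 + (n_docs - doc_freq + 0.5) / (doc_freq + 0.5))
    # tf term: saturates with repeated occurrences, normalized by doc length.
    return idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_len / avg_doc_len))

# "database" appears 3 times in a 100-word article; corpus of 1000 docs,
# 50 of which contain the term; average article length 120 words.
print(bm25_score(tf=3, doc_len=100, avg_doc_len=120, n_docs=1000, doc_freq=50))
```

The column weights in the index definition (`title=2.0,body=1.0`) scale per-field contributions before they are combined into the final score.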
## Advanced Features

See `references/advanced-features.md` for:

- Journal modes: WAL (default) and experimental MVCC (`PRAGMA journal_mode = experimental_mvcc`)
- Encryption: AES-GCM / AEGIS cipher support with the `--experimental-encryption` flag
- CDC: real-time change tracking via `PRAGMA unstable_capture_data_changes_conn('full')`
- Index methods: custom data access methods (experimental, `--experimental-index-method`)
- C API: SQLite C API subset with Turso extensions