# wren-query

The user wants to run a Wren CLI command. `$ARGUMENTS` is the SQL query or instruction.
## What to do

1. Check for `~/.wren/mdl.json` and `~/.wren/connection_info.json` using Read or Glob.
   - If either is missing, tell the user what's needed and show the format below.
   - If both exist, proceed directly.
2. Run the appropriate command based on what the user asked:
   | Intent | Command |
   |---|---|
   | Execute and return results | `uv run wren --sql '...'` |
   | Translate to native SQL (no DB) | `uv run wren dry-plan --sql '...'` |
   | Validate without fetching rows | `uv run wren dry-run --sql '...'` |
   | Check SQL is valid | `uv run wren validate --sql '...'` |
   If `wren` is installed globally (not via uv), use `wren` directly instead of `uv run wren`.
3. Show the result and explain what happened.
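The pre-flight check in step 1 can be sketched in Python. This is a minimal illustration of the logic described above, not part of the wren CLI; the function name `preflight` is invented here:

```python
import json
from pathlib import Path

WREN_DIR = Path.home() / ".wren"  # auto-discovery location from this doc
REQUIRED = ("mdl.json", "connection_info.json")

def preflight(wren_dir: Path = WREN_DIR) -> list[str]:
    """Return a list of problems; an empty list means both files look usable."""
    problems = []
    for name in REQUIRED:
        path = wren_dir / name
        if not path.exists():
            problems.append(f"{path} not found")
            continue
        try:
            json.loads(path.read_text())
        except json.JSONDecodeError as exc:
            problems.append(f"{path} is not valid JSON: {exc}")
    return problems
```

If `preflight()` returns a non-empty list, report the problems and show the file formats below instead of running a command.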
## Required files

Both files are auto-discovered from `~/.wren/`.

### mdl.json — semantic model

```json
{
  "catalog": "wren",
  "schema": "public",
  "models": [
    {
      "name": "orders",
      "tableReference": { "schema": "mydb", "table": "orders" },
      "columns": [
        { "name": "order_id", "type": "integer" },
        { "name": "total", "type": "double" },
        { "name": "status", "type": "varchar" }
      ],
      "primaryKey": "order_id"
    }
  ]
}
```

### connection_info.json — connection info (include the `datasource` field)

```json
{
  "datasource": "mysql",
  "host": "localhost",
  "port": 3306,
  "database": "mydb",
  "user": "root",
  "password": "secret"
}
```
Supported `datasource` values: `mysql`, `postgres`, `bigquery`, `snowflake`, `clickhouse`, `trino`, `mssql`, `databricks`, `redshift`, `oracle`, `duckdb`.
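A small validation helper can catch a bad or missing `datasource` before invoking the CLI. This is a sketch based on the values listed above; the function name `check_datasource` is illustrative:

```python
# Supported datasource values, copied from this document.
SUPPORTED_DATASOURCES = {
    "mysql", "postgres", "bigquery", "snowflake", "clickhouse",
    "trino", "mssql", "databricks", "redshift", "oracle", "duckdb",
}

def check_datasource(connection_info: dict) -> str:
    """Validate the datasource field of a parsed connection_info.json."""
    ds = connection_info.get("datasource")
    if ds is None:
        raise ValueError("datasource key not found in connection_info.json")
    if ds not in SUPPORTED_DATASOURCES:
        raise ValueError(f"unknown datasource {ds!r}; check spelling")
    return ds
```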
## Override flags

When needed, flags can override the defaults:

```shell
wren --sql '...' --mdl other-mdl.json --connection-file prod-connection_info.json
wren --sql '...' --output csv   # table (default) | csv | json
wren --sql '...' --limit 100
```
## Common errors

| Error | Fix |
|---|---|
| `mdl.json` not found | Create `~/.wren/mdl.json` |
| `connection_info.json` not found | Create `~/.wren/connection_info.json` with a `datasource` field |
| `datasource` key not found | Add `"datasource": "mysql"` to `connection_info.json` |
| `unknown datasource 'X'` | Check spelling; see supported values above |
| Connection refused | Confirm the DB is running and host/port are correct |
## More from canner/wren-engine

### wren-generate-mdl
Generate a Wren MDL project by exploring a database with available tools (SQLAlchemy, database drivers, MCP connectors, or raw SQL). Guides agents through schema discovery, type normalization, and MDL YAML generation using the wren CLI. Use when: user wants to create or set up a new MDL, onboard a new data source, or scaffold a project from an existing database.

### wren-onboarding
Onboard a user to Wren Engine end-to-end. Walks through environment checks, project scaffolding, connection configuration via .env, and first query. Use when: user wants to install Wren Engine, set up a new data source connection, or bootstrap a new project from scratch. Triggers: '/wren-onboarding', 'install wren', 'set up wren engine', 'wren onboarding', 'connect new database to wren'.

### wren-quickstart
End-to-end quickstart for Wren Engine — create a workspace, generate an MDL from a live database, save it as a versioned project, start the Wren MCP Docker container, and verify the setup with a health check. Trigger when a user wants to set up Wren Engine from scratch, onboard a new data source, or get started with Wren MCP. Requires dependent skills already installed (use /wren-usage to install them first).

### wren-mcp-setup
Set up Wren Engine MCP server via Docker and register it with an AI agent. Covers pulling the Docker image, running the container with docker run, mounting a workspace, configuring connection info via the Web UI (with Docker host hint), registering the MCP server in Claude Code (or other MCP clients) using streamable-http transport, and starting a new session to interact with Wren MCP. Trigger when a user wants to run Wren MCP in Docker, configure Claude Code MCP, or connect an AI client to a Dockerized Wren Engine.

### wren-connection-info
Reference guide for Wren Engine connection info — explains required fields for all 18 supported data sources (PostgreSQL, MySQL, BigQuery, Snowflake, ClickHouse, Trino, DuckDB, Databricks, Spark, Athena, Redshift, Oracle, SQL Server, Apache Doris, S3, GCS, MinIO, local files). Covers sensitive field handling, Docker host hints, and BigQuery credential encoding. Use when the user asks how to configure a data source connection or what fields to fill in.

### wren-http-api
Interact with Wren Engine MCP server via plain HTTP JSON-RPC requests — no MCP client SDK required. Covers session initialization, tool discovery, and calling all 20+ Wren tools (query, deploy, metadata, health check) using standard HTTP POST with JSON-RPC 2.0 payloads. Use when the client cannot or prefers not to use the MCP protocol directly (e.g. OpenClaw, custom HTTP clients, shell scripts, or any environment without an MCP SDK).