# Wren Connection Info Reference
Connection info can only be configured through the MCP server Web UI at http://localhost:9001. Do not attempt to set it programmatically. Always direct the user to the Web UI.
## Supported data sources

| Value | Database | Fields reference |
|---|---|---|
| `POSTGRES` | PostgreSQL | databases.md |
| `MYSQL` | MySQL / MariaDB | databases.md |
| `MSSQL` | SQL Server | databases.md |
| `CLICKHOUSE` | ClickHouse | databases.md |
| `ORACLE` | Oracle | databases.md |
| `DORIS` | Apache Doris | databases.md |
| `REDSHIFT` | Amazon Redshift | databases.md |
| `TRINO` | Trino | databases.md |
| `BIGQUERY` | Google BigQuery | databases.md |
| `SNOWFLAKE` | Snowflake | databases.md |
| `DUCKDB` | DuckDB | databases.md |
| `ATHENA` | AWS Athena | databases.md |
| `DATABRICKS` | Databricks | databases.md |
| `SPARK` | Apache Spark | databases.md |
| `S3_FILE` | Amazon S3 | file-sources.md |
| `GCS_FILE` | Google Cloud Storage | file-sources.md |
| `MINIO_FILE` | MinIO | file-sources.md |
| `LOCAL_FILE` | Local files | file-sources.md |
Read the linked reference file for the user's data source to get required fields, default ports, and setup notes.
## Common patterns

Most database connectors need: `host`, `port`, `user`, `password`, `database`.

Exceptions:

- BigQuery — uses `project_id`, `dataset_id`, `credentials` (base64-encoded). See databases.md for encoding instructions.
- Snowflake — uses `account` instead of `host`, plus `schema`.
- Trino — needs `catalog` and `schema` instead of `database`.
- Databricks — uses `serverHostname`, `httpPath`, `accessToken` (or service principal with `clientId`, `clientSecret`).
- Spark — only `host` and `port` (Spark Connect protocol, no auth fields).
- File sources — use `url`, `format`, plus bucket/credentials. See file-sources.md.
## Docker host hint

If the database runs on the host machine and Wren Engine runs inside Docker, `localhost` cannot reach the host. Use `host.docker.internal` instead:

| Original | Inside Docker |
|---|---|
| `localhost` | `host.docker.internal` |
| `127.0.0.1` | `host.docker.internal` |
| Cloud/remote hostname | No change needed |
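The mapping above reduces to one rule: rewrite loopback hosts, leave everything else alone. A hypothetical helper (not part of Wren) that an agent could use before suggesting a host value:

```python
def docker_host(host: str) -> str:
    """Rewrite a host for use from inside Docker: loopback
    addresses become host.docker.internal; cloud/remote
    hostnames pass through unchanged."""
    if host in ("localhost", "127.0.0.1"):
        return "host.docker.internal"
    return host
```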
## Sensitive fields
Never log, display, or pass sensitive values through the AI agent unnecessarily.
| Connector | Sensitive fields |
|---|---|
| Postgres / MySQL / MSSQL / ClickHouse / Oracle / Doris / Redshift | `password` |
| BigQuery | `credentials` |
| Snowflake | `password` |
| Athena | `aws_access_key_id`, `aws_secret_access_key` |
| Databricks (token) | `accessToken` |
| Databricks (service principal) | `clientId`, `clientSecret` |
| S3 / MinIO | `access_key`, `secret_key` |
| GCS | `key_id`, `secret_key`, `credentials` |
| Trino / Spark / Local files | (none) |
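One practical way to honor the "never log or display" rule is to redact these fields before any connection info reaches a log line or a chat response. A minimal illustrative helper (the field set is drawn from the table above; the function itself is not part of Wren):

```python
# Union of the sensitive field names from the table above.
SENSITIVE_FIELDS = {
    "password", "credentials", "accessToken",
    "clientId", "clientSecret",
    "aws_access_key_id", "aws_secret_access_key",
    "access_key", "secret_key", "key_id",
}

def redact(connection_info: dict) -> dict:
    """Return a copy safe for logging or display:
    sensitive values are replaced with '***'."""
    return {
        k: "***" if k in SENSITIVE_FIELDS else v
        for k, v in connection_info.items()
    }
```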
## More from canner/wren-engine

- **wren-usage** — Wren Engine CLI workflow guide for AI agents. Answer data questions end-to-end using the wren CLI: gather schema context, recall past queries, write SQL through the MDL semantic layer, execute, and learn from confirmed results. Use when: the user asks a data question, requests a report or analysis, or asks about metrics, revenue, customers, orders, or trends; the user says "how many", "show me", "what is the", "top N", "compare", "trend", "growth", or "breakdown"; the user wants to explore, analyze, filter, aggregate, or summarize data from a database; or the agent needs to query data, connect a data source, handle errors, or manage MDL changes via the wren CLI.
- **wren-generate-mdl** — Generate a Wren MDL project by exploring a database with available tools (SQLAlchemy, database drivers, MCP connectors, or raw SQL). Guides agents through schema discovery, type normalization, and MDL YAML generation using the wren CLI. Use when: the user wants to create or set up a new MDL, onboard a new data source, or scaffold a project from an existing database.