# add-semantic-layer
Wire in a semantic layer so it becomes the canonical source of truth for metrics: the agent queries it instead of computing metrics from raw tables.
## When to add — and when not to
Only add a semantic layer after `nao test` shows the agent struggling with metric reliability. Not before.
- Increases reliability and stability (one definition per metric).
- Reduces the scope of answerable questions (anything outside the layer is harder, sometimes impossible).
If failures are concentrated on schema gaps or date logic, a semantic layer doesn't help — fix `RULES.md` first.
Semantic layer vs. metric store: a semantic layer is a file (Markdown/YAML) the agent reads in order to write its own SQL. A metric store exposes metrics through an API the agent calls (`query_metric(...)`); the framework converts the call to SQL. dbt MetricFlow in dbt Cloud is a metric store; Snowflake views and nao YAML files are semantic layers. A metric store gives a bigger reliability gain, but also a bigger scope reduction.
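The distinction is easiest to see from the semantic-layer side. Below is a minimal, illustrative sketch of an agent compiling its own SQL from a file-based metric definition; the dict mirrors the kind of fields a metric definition captures, but the field names and `render_sql` helper are hypothetical, not a nao or dbt API:

```python
# Hypothetical metric definition, as it might be parsed from a
# semantic-layer YAML file. Table and column names are illustrative.
metric = {
    "name": "mrr",
    "source": "analytics.subscriptions",
    "column": "amount",
    "aggregation": "sum",
    "filters": ["status = 'active'"],
}

def render_sql(m, group_by=None):
    """Compile one metric definition into a SQL string."""
    select = list(group_by or [])
    select.append(f"{m['aggregation'].upper()}({m['column']}) AS {m['name']}")
    sql = f"SELECT {', '.join(select)} FROM {m['source']}"
    if m.get("filters"):
        sql += " WHERE " + " AND ".join(m["filters"])
    if group_by:
        sql += " GROUP BY " + ", ".join(group_by)
    return sql

print(render_sql(metric, group_by=["plan"]))
# SELECT plan, SUM(amount) AS mrr FROM analytics.subscriptions WHERE status = 'active' GROUP BY plan
```

With a metric store, this compilation step happens inside the framework; the agent only ever issues the `query_metric(...)` call, which is why the definitions stay consistent but anything outside them becomes unreachable.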
## Step 1 — Pick the tool
| Option | Type | When |
|---|---|---|
| dbt MetricFlow | Metric store | Already running dbt Cloud with the Semantic Layer enabled. |
| Snowflake views / semantic views | Semantic layer | Snowflake; using curated views or native semantic views. |
| nao semantic files | Semantic layer | No existing layer. Want a lightweight in-repo YAML. |
| Other | Varies | Looker/LookML, Cube, AtScale, etc. |
## Path A — dbt MetricFlow (dbt Cloud with Semantic Layer)
Add to `.claude/mcp.json`:
```json
{
  "mcpServers": {
    "dbt-mcp": {
      "command": "uvx",
      "args": ["dbt-mcp"],
      "env": {
        "DBT_HOST": "us1.dbt.com",
        "MULTICELL_ACCOUNT_PREFIX": "your_prefix",
        "DBT_TOKEN": "${DBT_TOKEN}",
        "DBT_PROD_ENV_ID": "your_env_id",
        "DISABLE_SEMANTIC_LAYER": "false",
        "DISABLE_DISCOVERY": "true",
        "DISABLE_SQL": "true",
        "DISABLE_ADMIN_API": "true",
        "DISABLE_REMOTE": "false"
      }
    }
  }
}
```
Substitute `MULTICELL_ACCOUNT_PREFIX`, `DBT_PROD_ENV_ID`, and `DBT_HOST` from the user's dbt Cloud account. Set `DBT_TOKEN` in their shell, not in the file. Restart the session and verify the MCP connects (`list_metrics`).
dbt Core (local-only) is not supported here — no metric-store API to route through.
Hand off to write-context-rules: in `## Key Metrics Reference`, route each MetricFlow metric through `query_metric` (e.g. MRR → query via dbt MCP `query_metric` (semantic layer)). In `## Analysis Process`, instruct the agent to use semantic-layer tools for known metrics instead of raw tables.
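As an illustration, the routed entries might read as follows (metric names are hypothetical; write-context-rules owns the actual template):

```markdown
## Key Metrics Reference

- MRR → query via dbt MCP `query_metric` (semantic layer); never recompute from raw tables.
- Active customers → query via dbt MCP `query_metric` (semantic layer).
```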
## Path B — Snowflake views / semantic views

Add to `.claude/mcp.json`:
```json
{
  "mcpServers": {
    "snowflake": {
      "command": "uvx",
      "args": ["mcp-server-snowflake"],
      "env": {
        "SNOWFLAKE_ACCOUNT": "your_account",
        "SNOWFLAKE_USER": "your_user",
        "SNOWFLAKE_PASSWORD": "${SNOWFLAKE_PASSWORD}",
        "SNOWFLAKE_WAREHOUSE": "your_warehouse",
        "SNOWFLAKE_DATABASE": "your_database",
        "SNOWFLAKE_SCHEMA": "your_schema",
        "SNOWFLAKE_ROLE": "your_role"
      }
    }
  }
}
```
For native semantic views (Cortex Analyst), use the Cortex MCP variant with `SEMANTIC_VIEW` set. Verify package and env-var names against the latest docs — auth options (key pair / OAuth / password) vary.
Identify the semantic surface (curated views like `analytics.metrics.*` or native semantic views). Hand off to write-context-rules: in `## Key Metrics Reference`, route each metric to its view, never to the underlying tables.
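As an illustration, the routed entries might read as follows (view names are hypothetical):

```markdown
## Key Metrics Reference

- MRR → `analytics.metrics.mrr_monthly` (curated view); never the underlying subscription tables.
- Churn rate → `analytics.metrics.churn_monthly` (curated view).
```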
## Path C — Other (no obvious MCP)
Search the MCP registry, the tool's docs, and the user's installed MCPs. If a fit exists, configure it following the pattern from Paths A and B. If not, fall back to Path D (nao semantic files) or build a thin MCP wrapper.
## Path D — nao semantic files
For users with no existing semantic layer. One file, `semantics/semantic.yaml`, holds all dimensions and metrics together. Start from `templates/semantic.yaml`.
Walk through dimensions first (slice axes: date, plan, country — capture name, type, description, and allowed values for categoricals), then top metrics (capture name, definition, source table + column + aggregation, grain, dimensions, filters).
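A sketch of what that walkthrough might produce (table, column, and value names are hypothetical; `templates/semantic.yaml` defines the real schema):

```yaml
# Illustrative semantics/semantic.yaml — check templates/semantic.yaml
# for the actual field names and structure.
dimensions:
  - name: plan
    type: categorical
    description: Subscription plan at time of event
    values: [free, pro, enterprise]
  - name: signup_date
    type: date
    description: Date the account was created
metrics:
  - name: mrr
    definition: Sum of active subscription amounts
    source: analytics.subscriptions
    column: amount
    aggregation: sum
    grain: month
    dimensions: [plan]
    filters: ["status = 'active'"]
```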
Hand off to write-context-rules: in `## Key Metrics Reference`, point every metric at `semantics/semantic.yaml`.
## Validate

- Confirm every metric the user cares about now has a routing rule in `RULES.md`.
- `nao chat` one of their top questions; confirm the agent uses the semantic layer.
- Run `nao test` and compare to the pre-semantic-layer baseline pass rate. Reliability is the only reason to do this — measure it.
## Recommend next step

- No tests yet → `create-context-tests`.
- Reliability dropped → `audit-context`.
- Otherwise → `write-context-rules` to refine other sections.
## Guardrails

- Only add a layer after tests show metric failures. Cite them when the user asks "should we add one?"
- One semantic layer at a time. Two competing layers create MECE violations.
- Don't write `RULES.md` directly. Hand off to `write-context-rules`.
- Don't store credentials in `.claude/mcp.json`. Use `${ENV_VAR}`. Add the file to `.gitignore` if anything sensitive lands there.
- Don't invent metrics for Path D. Only encode what the user defines.
## Templates

- `templates/semantic.yaml` — single-file schema for Path D.