write-context-rules
RULES.md is loaded with every message to the nao agent — keep it lean. Two purposes only:
- Orchestrator — point the agent to the right context fast (which metric → which table, which topic → which file, which question type → which skill).
- Broad rules — how to query and how to answer.
Anything else (per-table schema, full metric semantics, domain-specific rules) belongs in a referenced file: `databases/<table>.md`, `semantics/<metric>.yaml`, or a domain `.md`. Reference: docs.getnao.io/nao-agent/context-builder/rules-context.
Standard sections (see templates/RULES.md)
- `## Business overview` — Product + Business model.
- `## Data architecture` — Warehouse, data stack, layers, sources.
- `## Core data models` — `### Most Used Tables` (one-line pointers) + `### Tables detail` (Purpose / Granularity / Key Columns ≤10 / Use For).
- `## Key Metrics Reference` — grouped by category; **metric** → table, column, formula.
- `## Date filtering` — three example formulas (last X weeks / last X days / current month). Don't enumerate every period.
- `## Analysis Process` — 5 subsections: Understand → Select Table → Write Query → Validate → Context.
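Taken together, the scaffold looks roughly like this. This is a sketch of the section order only; the real skeleton lives in templates/RULES.md:

```markdown
## Business overview
## Data architecture
## Core data models
### Most Used Tables
### Tables detail
## Key Metrics Reference
## Date filtering
## Analysis Process
```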
Flow
Generate section by section. Write each section to RULES.md, show the user, then move on. Don't read everything and write everything in one batch — the user needs to see progress and catch wrong inferences early.
If RULES.md already has content, run the audit-and-fill flow at the bottom instead.
Step 1 — ## Business overview
Sources: web search for the company name/domain (from nao_config.yaml), then databases/ and repos/<dbt>/. Output two paragraphs: Product (what the company does) + Business model (revenue + customer journey).
Step 2 — ## Data architecture
From databases/ and repos/<dbt>/: Warehouse type/project/dataset, Data stack (e.g. dlt, dbt, no semantic layer), Data layers (e.g. bronze / silver / gold), Data sources (numbered list with prefix + one-line description).
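A filled-in section from this step might look like the sketch below. All names (project, dataset, source prefixes) are hypothetical placeholders, not real values:

```markdown
## Data architecture
- Warehouse: BigQuery, project `acme-analytics`, dataset `analytics_prod`
- Data stack: dlt (ingestion), dbt (transformation), no semantic layer
- Data layers: bronze (raw) / silver (staging) / gold (marts)
- Data sources:
  1. `stripe_`: billing and subscription data
  2. `hubspot_`: CRM contacts and deals
```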
Step 3 — ## Core data models
### Most Used Tables — one line per in-scope table:
- `dim_users` — user dimension. See `databases/.../table=dim_users/`.
### Tables detail — per-table block: Purpose, Granularity, Key Columns (cap at 10), Use For. Per-table detail beyond top 10 columns lives in databases/, not here.
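One per-table block in this format might look like the following sketch (table and column names are hypothetical):

```markdown
#### `dim_users`
- Purpose: current attributes for every registered user
- Granularity: one row per `user_id`
- Key Columns: `user_id`, `email`, `signup_date`, `plan`, `country`
- Use For: user counts, plan breakdowns, joining facts to user attributes
```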
Step 4 — ## Key Metrics Reference
Group by category (Revenue / Activity / Conversion). Format:
### Revenue
- **MRR** → `fct_stripe_mrr.mrr_amount`, `SUM(mrr_amount) WHERE status='active'`
If a semantic layer is configured (add-semantic-layer), route through it: **ARR** → query via dbt MCP query_metric (semantic layer).
Step 5 — ## Date filtering (placeholder until Step 8)
Leave a `> TODO` marker: this section is filled in with the user in Step 8.
Step 6 — ## Analysis Process
Use the template's 5 subsections verbatim. The project-specific bit is subsection 2 (Select Right Tables): map each major question category to its starting table, derived from Steps 3-4.
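Subsection 2 then reads as a question-to-table map. A sketch with hypothetical tables:

```markdown
2. Select Right Tables
- Revenue / MRR questions → `fct_stripe_mrr`
- Product activity questions → `fct_events`
- Signup / conversion questions → `fct_signups` + `dim_users`
```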
Step 7 — Validate metrics with the user
For each metric in ## Key Metrics Reference, ask the user to confirm or correct the source-of-truth pointer. Update in place.
Step 8 — Date filtering, with the user
Two questions decide most of it:
- Week boundary: does a week start Sunday (BigQuery `WEEK`) or Monday (`ISOWEEK`)? Applies to "last week", "last N weeks", and week-over-week.
- Current-period inclusion: when the user says "last 8 weeks" / "last 30 days", include the current incomplete period or exclude it? Rolling-from-now vs. boundary-aligned.
Then: fiscal year start if non-calendar; anything else org-specific.
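The two conventions produce different windows for the same request. A minimal sketch in Python of what each returns, assuming Monday-start ISO weeks (the function name is illustrative, not part of nao):

```python
from datetime import date, timedelta

def last_n_weeks(today: date, n: int, include_current: bool) -> tuple[date, date]:
    """Return [start, end) for 'last n weeks' under a Monday-start convention."""
    monday = today - timedelta(days=today.weekday())  # start of current ISO week
    if include_current:
        # Rolling-from-now: n weeks back from today, up to and including today.
        return today - timedelta(weeks=n), today + timedelta(days=1)
    # Boundary-aligned: n complete weeks, excluding the current incomplete one.
    return monday - timedelta(weeks=n), monday

# Wednesday 2024-06-12: the current ISO week started Monday 2024-06-10.
start, end = last_n_weeks(date(2024, 6, 12), 2, include_current=False)
# start = 2024-05-27 (Monday), end = 2024-06-10: two complete weeks.
```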
Write three example formulas only — Last X weeks, Last X days, Current month. The agent extrapolates other periods from these. Each block gets a one-line note above stating the convention used.
```sql
-- Last X weeks (Monday-start, excludes current incomplete week)
WHERE date >= DATE_TRUNC(DATE_SUB(CURRENT_DATE(), INTERVAL X * 7 DAY), ISOWEEK)
  AND date < DATE_TRUNC(CURRENT_DATE(), ISOWEEK)
```
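The other two blocks can follow the same pattern. A BigQuery-flavored sketch, to be adapted to the actual warehouse and the conventions chosen in Step 8:

```sql
-- Last X days (rolling, includes today)
WHERE date >= DATE_SUB(CURRENT_DATE(), INTERVAL X DAY)

-- Current month (month-to-date)
WHERE date >= DATE_TRUNC(CURRENT_DATE(), MONTH)
```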
Audit-and-fill flow (when RULES.md is not empty)
- Read it. Compare against the six standard sections. Produce a one-line gap report (present / missing / thin per section).
- Ask the user which sections to fill.
- Run only the relevant generation steps above. Show diffs before saving.
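The gap report itself can stay one line; an illustrative example (section states are made up):

```
Business overview: present · Data architecture: present · Core data models: thin · Key Metrics Reference: missing · Date filtering: missing · Analysis Process: present
```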
For deeper diagnostics (MECE, schema drift, test failure root causes), route to audit-context.
Guardrails
- Section by section, not all-at-once. Show progress, let the user course-correct.
- Show diffs, don't auto-overwrite.
- Don't bloat RULES.md. Per-table detail lives in `databases/<table>.md`.
- Don't invent metric sources. Unclear → list for user validation in Step 7.
- `## Date filtering` keeps three examples max.
Templates
`templates/RULES.md` — six-section scaffold. This skill is the only one that writes to RULES.md.
More from getnao/nao
audit-context
Diagnose the health of a nao context at any stage of its lifecycle. Use when the user wants a structured review of what's been synced, how RULES.md compares to the target structure, whether every table is documented, whether the data model is MECE, whether tests exist and what their failures reveal, and whether context files are bloated. Outputs a structured audit report with ranked recommendations. Do not use for first-time setup (setup-context) or routine rule writing (write-context-rules).
create-context-tests
Generate a test suite of natural-language → SQL pairs that becomes the quality benchmark for a nao agent, then run it via `nao test`. Use when the user wants to start measuring agent reliability, extend an existing test suite, or add tests for new metrics. Tests are the only honest answer to "is the context working?". Do not use for writing rules (write-context-rules) or diagnosing failures (audit-context).
setup-context
Bootstrap a nao agent for a project — gather warehouse + scope + extra-context info in one round, look up the warehouse-specific config from nao docs, write nao_config.yaml, run nao init + nao sync, set up the LLM key, and generate the first RULES.md. Use when the user has just decided to use nao on a new project. Only for first-time setup; for editing rules, generating tests, or reviewing an existing context, use write-context-rules / create-context-tests / audit-context.
add-semantic-layer
Wire a semantic layer into a nao agent so that metric queries are routed through a single source of truth. Supports dbt MetricFlow (dbt Cloud with Semantic Layer), Snowflake (views or semantic views via MCP), an in-house nao YAML semantic layer, or other tools (via MCP discovery). Installs the right MCP server, updates RULES.md to route metric queries through the semantic layer, and (for the nao YAML option) generates starter metric files. Use after a first round of tests has shown the agent struggling with metric reliability. Do not use for raw rule writing (write-context-rules) or first-time setup (setup-context).