514-production-rollout-plan
# Production Rollout Planning
Run through three stages sequentially: SETUP -> MIGRATION PLANNING -> REVIEW AND SHIP.
If the user provided a project slug as an argument, use it to skip the project selection prompt in Stage 1.
## Command safety

Commands fall into two categories:

- **Guardrailed read-only:** `514 agent auth whoami`, `514 agent project list`, `514 agent deployment list`, `514 agent table list`, `514 agent materialized-view list`, `514 agent sql-resource list`, `514 deployment list`, `514 env list`, `514 env get`, `git branch --show-current`, `git status --short`, `git diff --name-only`, `gh pr view`, `moose ls`
- **Raw ClickHouse (require user approval):** any `514 clickhouse query` invocation used to inspect current production state or define validation SQL.

Before running any `514 clickhouse query` command, use AskUserQuestion to show the user the exact command or SQL and get explicit approval.
## Stage 1 — SETUP

Goal: Identify the project, exact target deployment, local change context, and remote Moose auth readiness before migration planning.
- Verify authentication:

  ```
  514 agent auth whoami --json
  ```

- Resolve the target project:

  ```
  514 agent project list --json
  ```

- Capture the branch and change context:

  ```
  git branch --show-current
  git status --short
  git diff --name-only
  ```
- Resolve the target branch before looking up deployments. First, check whether the current branch has an open pull request:

  ```
  gh pr view --json baseRefName --jq '.baseRefName' 2>/dev/null
  ```

  - If a PR exists, use its base branch as the target (e.g., `perf/baseline`, `main`).
  - If no PR exists, fall back to `main`.
  - Use AskUserQuestion to confirm the resolved target branch with the user before proceeding.
- Resolve the exact target deployment before touching auth or migration generation:

  ```
  514 deployment list --project <PROJECT> --json
  ```

  Choose the target deployment with these rules:

  - Choose the latest deployment where `branch = <TARGET_BRANCH>` and `status = "Deployed"`.
  - If none qualify, or "latest" cannot be determined unambiguously from the returned deployment data, stop and carry a blocker instead of guessing.

  Capture: `TARGET_DEPLOY_ID`, `TARGET_DEPLOY_BRANCH`, `TARGET_DEPLOY_URL`.
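The selection rules above can be sketched as a small filter over the JSON that `514 deployment list --json` returns. This is a minimal illustration, not the CLI's actual behavior; the field names `id`, `branch`, `status`, and `created_at` are assumptions about the payload shape, and `pick_target_deployment` is a hypothetical helper name.

```python
import json

def pick_target_deployment(deployments_json: str, target_branch: str):
    """Pick the latest 'Deployed' deployment on target_branch, or None (blocker).

    Assumes each entry carries hypothetical fields: id, branch, status, created_at.
    """
    deployments = json.loads(deployments_json)
    candidates = [
        d for d in deployments
        if d.get("branch") == target_branch and d.get("status") == "Deployed"
    ]
    if not candidates:
        return None  # no qualifying deployment: carry a blocker instead of guessing
    if any("created_at" not in d for d in candidates):
        return None  # "latest" is ambiguous without timestamps: carry a blocker
    # ISO-8601 timestamps sort lexicographically, so max() picks the latest
    return max(candidates, key=lambda d: d["created_at"])
```

Returning `None` in both ambiguous cases mirrors the "stop and carry a blocker" rule rather than silently picking a deployment.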
## Stage 2 — MIGRATION PLANNING

Goal: Generate the migration plan against the target deployment, review the generated artifacts, classify rollout risk, and produce the rollout plan.
### Gather migration artifacts
- Confirm that migration artifacts exist in `migrations/`:

  | File | Purpose |
  |---|---|
  | `migrations/plan.yaml` | Ordered list of migration operations to execute |
  | `migrations/remote_state.json` | Snapshot of the target deployment's database schema at generation time |
  | `migrations/local_infra_map.json` | Snapshot of local code schema definitions |

  If any artifact is missing, stop and carry a blocker.
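The existence check above is simple enough to express directly. A minimal sketch, using only the three artifact paths listed in the table; `missing_artifacts` is a hypothetical helper name:

```python
from pathlib import Path

# The three required artifacts from the table above
REQUIRED_ARTIFACTS = [
    "migrations/plan.yaml",
    "migrations/remote_state.json",
    "migrations/local_infra_map.json",
]

def missing_artifacts(root: str = ".") -> list[str]:
    """Return the required migration artifacts that do not exist under root."""
    return [rel for rel in REQUIRED_ARTIFACTS if not (Path(root) / rel).is_file()]
```

A non-empty result means the run should stop and carry a blocker rather than proceed with partial artifacts.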
### Analyze operations
Operation type reference (see `.moose/migration_schema.json` for the full schema):

| Category | Operation | Effect |
|---|---|---|
| Additive | `CreateTable` | New table with columns, order_by, engine |
| Additive | `AddTableColumn` | New column on existing table |
| Additive | `RenameTableColumn` | Column rename, data preserved |
| Additive | `AddTableIndex` | New data-skipping index on existing table |
| Additive | `AddTableProjection` | New projection for alternative data ordering |
| Additive | `CreateMaterializedView` | New MV with SELECT and target table |
| Additive | `CreateView` | New user-defined SELECT view |
| Destructive | `DropTable` | Permanently removes table and all data |
| Destructive | `DropTableColumn` | Removes column from table |
| Destructive | `DropTableIndex` | Removes data-skipping index |
| Destructive | `DropTableProjection` | Removes projection from table |
| Destructive | `DropMaterializedView` | Removes materialized view |
| Destructive | `DropView` | Removes user-defined view |
| Destructive | `ModifyTableColumn` | Changes column type/properties (see safe-widening rules below) |
| Metadata | `ModifyTableSettings` | Changes table-level settings (e.g., `index_granularity`) |
| Metadata | `ModifySampleBy` | Changes SAMPLE BY expression |
| Metadata | `RemoveSampleBy` | Removes SAMPLE BY from table |
| Metadata | `ModifyTableTtl` | Changes or removes table-level TTL |
| Custom | `RawSql` | Arbitrary SQL array with description |
**`ModifyTableColumn` safe-widening rules:**

Not all `ModifyTableColumn` operations are destructive. Classify each by comparing `before_column` and `after_column`:

| Change | Classification | Reason |
|---|---|---|
| `String` → `LowCardinality(String)` | Safe | Dictionary encoding wrapper, no data loss |
| `IntN` → wider `IntM` (e.g., `Int32` → `Int64`) | Safe | Wider numeric range, existing values preserved |
| `T` → `Nullable(T)` | Safe | Adds NULL support, existing non-NULL values preserved |
| `FloatN` → wider `FloatM` (e.g., `Float32` → `Float64`) | Safe | Greater precision, existing values preserved |
| Adding or changing annotations only (same base type) | Safe | Metadata-only change |
| `LowCardinality(String)` → `String` | Safe | Removes dictionary encoding, data preserved |
| Narrowing numeric (e.g., `Int64` → `Int32`) | Destructive | Values outside target range are truncated |
| `Nullable(T)` → `T` | Destructive | NULL values become default, potential data loss |
| Type family change (e.g., `String` → `Int64`) | Destructive | Incompatible conversion, potential data loss |

When classifying the overall plan, safe-widening `ModifyTableColumn` operations count as additive, not destructive.
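The classification table above can be sketched as code. This is a simplified illustration that matches type names textually; ClickHouse's real type grammar (nested `LowCardinality(Nullable(...))`, parameterized types, etc.) is richer, and `classify_modify_column` is a hypothetical helper name:

```python
_INT_BITS = {"Int8": 8, "Int16": 16, "Int32": 32, "Int64": 64}
_FLOAT_BITS = {"Float32": 32, "Float64": 64}

def classify_modify_column(before: str, after: str) -> str:
    """Classify a ModifyTableColumn as 'safe' or 'destructive' per the table above."""
    if before == after:
        return "safe"  # annotation-only change, same base type
    if {before, after} == {"String", "LowCardinality(String)"}:
        return "safe"  # dictionary encoding added or removed, data preserved
    if after == f"Nullable({before})":
        return "safe"  # adds NULL support, existing non-NULL values preserved
    if before == f"Nullable({after})":
        return "destructive"  # NULL values collapse to the column default
    if before in _INT_BITS and after in _INT_BITS:
        # Widening an integer is safe; narrowing truncates out-of-range values
        return "safe" if _INT_BITS[after] > _INT_BITS[before] else "destructive"
    if before in _FLOAT_BITS and after in _FLOAT_BITS:
        return "safe" if _FLOAT_BITS[after] > _FLOAT_BITS[before] else "destructive"
    return "destructive"  # type-family change or anything unrecognized
```

Defaulting unrecognized changes to `destructive` errs on the side of flagging for review, consistent with the rest of this stage.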
- Read `migrations/plan.yaml` and scan each operation in a single pass. For each operation:

  **Verify intent:** Confirm the affected table and column names match the code diff from Stage 1. If a destructive operation is unexpected, use AskUserQuestion to present it and get explicit approval.

  **Classify:** Tag each operation as additive, destructive, or custom. The overall plan is additive if all operations are additive, destructive if any are destructive, mixed if both.

  **Detect patterns:**

  - `DropTable` + `CreateTable` for the same table name = versioned-table upgrade. The generated plan is unsafe as-is because the Drop destroys data before the Create can receive a backfill. Rewrite the plan to the rename-backfill-drop pattern:

    - Replace the `DropTable` with a `RawSql` that renames the old table:

      ```yaml
      - RawSql:
          description: Rename old <TABLE> to preserve data for backfill
          sql:
            - "RENAME TABLE <TABLE> TO <TABLE>_old"
      ```

    - Keep the `CreateTable` as generated (it creates the new schema under the original name).
    - Insert a `RawSql` backfill that copies data from the renamed old table to the new table. For each column, compare the old schema (from `remote_state.json`) against the new `CreateTable` DDL:
      - If type or annotations changed (e.g., `String` → `LowCardinality(String)`), wrap in `CAST(col, 'NewClickHouseType')`.
      - If unchanged, pass through by column name.

      Present the generated INSERT to the user for approval.

      ```yaml
      - RawSql:
          description: Backfill new <TABLE> from renamed old table
          sql:
            - "INSERT INTO <TABLE> SELECT <col_list_with_casts> FROM <TABLE>_old"
      ```

    - Insert a `RawSql` that drops the renamed old table after backfill:

      ```yaml
      - RawSql:
          description: Drop old renamed <TABLE> after backfill
          sql:
            - "DROP TABLE IF EXISTS <TABLE>_old"
      ```

  - `ModifyTableColumn` with a narrowing type change = potential data loss. Flag for backfill.
  - `RawSql` with backfill intent = confirm the SQL is safe and the description is clear.

  **Check ordering:**

  - `CreateTable` (new version) must precede any `RawSql` backfill that populates it.
  - `RawSql` backfill must precede any `DropTable` that destroys its source data.
  - `AddTableColumn` must precede any `RawSql` that references the new column.
  - `ModifyTableColumn` must not follow a `DropTableColumn` on the same table if it depends on the dropped column.

  **Check dependencies:** For any destructive operation (`DropTable`, `DropTableColumn`, `ModifyTableColumn`), use AskUserQuestion to ask whether the user knows of applications, materialized views, or downstream consumers that depend on the affected table or column. Carry dependencies forward as blockers.

  If the plan needs edits (reordering, injecting a `RawSql` backfill, replacing a drop+add with `RenameTableColumn`), recommend the specific edit and confirm with the user before modifying `plan.yaml`.

  **`RawSql` schema contract:** When injecting `RawSql` operations, the `sql` field must be an array of strings, not a single string. This is defined in `.moose/migration_schema.json`. The `RawSql` operation has exactly two required fields: `sql` (array) and `description` (string). Do not add `cluster_name`, `database`, or other fields that belong to other operation types.

  **Post-edit validation:** After any manual edits to `plan.yaml`, read `.moose/migration_schema.json` and verify the edited plan conforms. Check that:

  - Every `RawSql.sql` is an array of strings.
  - Every operation matches one of the `oneOf` variants in the schema.
  - All required fields are present for each operation type.
  - No extra fields are added that the schema does not define for that operation.

  If validation fails, fix the plan before proceeding.
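The `RawSql` part of that contract is mechanical enough to sketch. A minimal structural check, assuming the parsed `plan.yaml` is a list of single-key operation mappings (an assumption about the YAML layout, not confirmed by the schema file itself); it covers only the `RawSql` contract described above, not the full `oneOf` validation:

```python
def validate_raw_sql_ops(plan_ops: list) -> list[str]:
    """Return error messages for RawSql operations that violate the contract above."""
    errors = []
    for i, op in enumerate(plan_ops):
        if not isinstance(op, dict) or list(op) != ["RawSql"]:
            continue  # other operation types need the full oneOf schema check
        body = op["RawSql"]
        # Exactly two fields are allowed: sql (array of strings) and description
        extra = set(body) - {"sql", "description"}
        if extra:
            errors.append(f"op {i}: unexpected fields {sorted(extra)}")
        if not isinstance(body.get("description"), str):
            errors.append(f"op {i}: description must be a string")
        sql = body.get("sql")
        if not (isinstance(sql, list) and all(isinstance(s, str) for s in sql)):
            errors.append(f"op {i}: sql must be an array of strings")
    return errors
```

An empty result means the injected `RawSql` operations pass the structural contract; non-`RawSql` operations still need to be checked against `.moose/migration_schema.json` directly.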
- Verify `migrations/remote_state.json` was captured against the correct target deployment. If it does not match the target from Stage 1, stop and carry a blocker.

- Record:

  - `change_type` (additive, destructive, or mixed)
  - which operations are reversible (additive) and which are not (`DropTable`, `DropTableColumn` = permanent data loss; `ModifyTableColumn` narrowing = potential data loss)
  - whether backfill is required, for which tables, and the backfill SQL or strategy
  - whether any versioned-table upgrades were detected
  - whether auth bootstrap was required
  - any plan edits made
  - cleanup: old tables, columns, or resources to remove after stabilization
  - unknowns — call out explicitly rather than collapsing into a default
## Stage 3 — REVIEW AND SHIP

Goal: Present the migration plan for approval. If approved, commit the migration artifacts and create a pull request.
- Use AskUserQuestion to present the rollout plan and explicitly review:

  - the `migrations/plan.yaml` operations and whether the classification matches the intended change
  - any destructive operations and their justification
  - any manual edits made to `plan.yaml`
  - blockers
  - rollback realism
  - any missing validation evidence

  Revise the plan until the user approves it or provides a new constraint.
- Once approved, commit the migration artifacts:

  ```
  git add migrations/plan.yaml migrations/remote_state.json migrations/local_infra_map.json
  git commit -m "<descriptive commit message summarizing the migration plan>"
  ```

- Create a pull request with the migration plan:

  ```
  git push -u origin HEAD
  gh pr create --title "<title>" --body "<body>"
  ```

  The PR body should summarize:

  - the change type (additive, destructive, or mixed)
  - affected tables
  - any versioned-table upgrades and backfill steps
  - any dependent consumers flagged during analysis
  - blockers or open questions