query-validation
When to use
- A SQL query is about to be promoted to a production dashboard or report
- A query is returning surprising or incorrect results
- A query is running slowly and needs performance review
- You want to catch anti-patterns (implicit conversions, SELECT *, unbounded CTEs) before they cause incidents
Process
- Lint the query — run `scripts/sql_lint.py` (sqlglot-based) to catch syntax errors, unsupported functions for the target engine, and style violations. Fix hard errors before continuing.
- Review anti-patterns — compare the query structure against `references/sql_anti_patterns.md`. Flag any present anti-patterns with a severity rating.
- Parse the explain plan — if an EXPLAIN or query profile output is available, run `scripts/explain_plan_parser.py` to extract slow steps (full table scans, missing indexes, high row estimates).
- Estimate cardinality — run `scripts/cardinality_estimator.py` if schema stats are available to flag joins that might fan out unexpectedly.
- Check engine-specific behaviour — consult `references/engine_specific_guide.md` for the target engine (Snowflake / BigQuery / Postgres / Redshift) to verify date functions, window behaviour, and clustering assumptions.
- Produce review output — fill in `assets/query_review_template.md` with findings; for any performance issues found, complete `assets/optimization_recommendations.md`.
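The lint step's anti-pattern checks can be sketched in a few lines. Note the real `scripts/sql_lint.py` is sqlglot-based and parses the query properly; the regex stand-in below is a simplified assumption, showing only two of the anti-patterns the skill mentions (SELECT * and unbounded CTEs).

```python
import re

def quick_lint(sql: str) -> list[str]:
    """Simplified, regex-only stand-in for the sqlglot-based linter."""
    findings = []
    # SELECT * widens scans and breaks silently when columns change.
    if re.search(r"\bselect\s+\*", sql, re.IGNORECASE):
        findings.append("SELECT *: list columns explicitly")
    # A CTE with no WHERE or LIMIT anywhere hints at an unbounded scan.
    if re.search(r"\bwith\b", sql, re.IGNORECASE) and not re.search(
        r"\b(where|limit)\b", sql, re.IGNORECASE
    ):
        findings.append("unbounded CTE: no WHERE or LIMIT found")
    return findings

print(quick_lint("WITH t AS (SELECT * FROM events) SELECT * FROM t"))
```

A real implementation would walk the parsed expression tree instead of matching text, so it would not misfire on keywords inside string literals or comments.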
Inputs the skill needs
- Required: the SQL query text
- Required: target database engine (Snowflake / BigQuery / Postgres / Redshift / other)
- Optional: relevant table schemas (column names, types, approximate row counts)
- Optional: EXPLAIN / query profile output
- Optional: expected business logic — what should the query calculate?
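The optional row counts are what make the fan-out check possible. As an illustration (the actual method used by `scripts/cardinality_estimator.py` is not documented here), the textbook equi-join estimate assumes uniformly distributed join keys:

```python
def estimated_join_rows(rows_a: int, rows_b: int, ndv_a: int, ndv_b: int) -> int:
    """Expected output rows of A JOIN B ON A.k = B.k, assuming the
    classic uniform-distribution estimate: |A| * |B| / max(NDV(A.k), NDV(B.k))."""
    return rows_a * rows_b // max(ndv_a, ndv_b)

# 1M orders joined to 10k customers on customer_id (10k distinct values):
# about one output row per order, i.e. no fan-out.
print(estimated_join_rows(1_000_000, 10_000, 10_000, 10_000))  # 1000000

# Same orders joined on a low-cardinality key (1k distinct values)
# blows up to ~10M rows: a fan-out worth flagging.
print(estimated_join_rows(1_000_000, 10_000, 1_000, 1_000))  # 10000000
```

When the estimate is much larger than the bigger input table, the join multiplies rows, which is exactly the "fan-out" the skill warns about.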
Output
- `assets/query_review_template.md` (filled) — categorised findings: correctness, performance, style
- `assets/optimization_recommendations.md` (filled, if issues found) — ranked rewrite suggestions with expected impact