Analysis Assumptions Log
When to use
- Starting an analysis with significant scope, method, or data quality choices
- Preparing work for peer review or stakeholder sign-off
- Returning to an old analysis and needing to understand prior decisions
- Working in a regulated environment where auditability is required
- Handing off an analysis to another analyst
Process
- Initialise the log — create a log entry for the analysis with its name, date, analyst, and the decision it informs. Use `scripts/assumptions_tracker.py` to initialise a structured JSON log.
- Enumerate data assumptions — document representativeness, completeness, how missing values are handled, and any known quality issues. For each assumption, record the rationale and confidence level (high/medium/low). See `references/assumption_categories.md` for the full taxonomy.
- Enumerate business logic assumptions — record metric definitions, time windows, inclusion/exclusion rules, and any definitions provided by stakeholders. Note alternatives considered.
- Enumerate statistical assumptions — record distribution assumptions, independence claims, stationarity, or model assumptions relevant to the methods used.
- Assess impact and flag critical assumptions — for each low-confidence assumption with high impact if wrong, create a validation plan. Run `scripts/assumptions_tracker.py --report` to surface the critical list.
- Validate and close — as validation occurs, update the log with results. Export `assets/assumptions_log_template.md` for peer review sign-off before delivery.
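The steps above can be sketched in code. The following is a minimal illustration of the kind of structured JSON log the process describes, not the actual `scripts/assumptions_tracker.py`; field names beyond those named in the text (analysis, date, analyst, decision, category, rationale, confidence, impact) are assumptions, and the example entry is hypothetical.

```python
import json
from datetime import date

# One log per analysis: name, date, analyst, and the decision it informs.
log = {
    "analysis": "Q3 churn driver analysis",       # hypothetical example
    "date": date.today().isoformat(),
    "analyst": "A. Analyst",
    "decision": "Prioritise retention spend by segment",
    "assumptions": [],
}

def add_assumption(log, category, statement, rationale, confidence, impact):
    """Record one assumption; confidence and impact are high/medium/low."""
    log["assumptions"].append({
        "category": category,        # data | business_logic | statistical
        "statement": statement,
        "rationale": rationale,
        "confidence": confidence,
        "impact": impact,
        "validated": False,          # flipped as validation closes it out
    })

add_assumption(
    log, "data",
    "The July snapshot is representative of the full quarter",
    "Weekly volumes vary by less than 5% across the quarter",
    "medium", "high",
)

# The critical list: low confidence combined with high impact if wrong.
critical = [a for a in log["assumptions"]
            if a["confidence"] == "low" and a["impact"] == "high"]

print(json.dumps(log, indent=2))
```

With only the medium-confidence entry above, `critical` is empty; adding a low-confidence, high-impact assumption would surface it for a validation plan.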
Inputs the skill needs
- Analysis name and the decision it informs
- Data sources, time period, and population being analysed
- Key methodological choices made (and alternatives considered)
- Stakeholder-provided business rule definitions
- Any known data quality issues
Output
- `scripts/assumptions_tracker.py` — CLI tool to log assumptions, flag critical ones, and export a summary
- `assets/assumptions_log_template.md` — completed log for peer review and audit trail
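To illustrate the export step, here is a hedged sketch that renders such a JSON log as a markdown table. The real `assets/assumptions_log_template.md` format is not shown in this document, so the columns below are assumptions.

```python
def export_markdown(log):
    """Render an assumptions log dict as a markdown table for sign-off.

    The column set is illustrative; the actual template may differ.
    """
    lines = [
        f"# Assumptions Log: {log['analysis']}",
        "",
        "| Category | Assumption | Confidence | Impact | Validated |",
        "|---|---|---|---|---|",
    ]
    for a in log["assumptions"]:
        lines.append(
            f"| {a['category']} | {a['statement']} | "
            f"{a['confidence']} | {a['impact']} | "
            f"{'yes' if a['validated'] else 'no'} |"
        )
    return "\n".join(lines)

# Hypothetical log matching the structure sketched above.
log = {
    "analysis": "Q3 churn driver analysis",
    "assumptions": [{
        "category": "data",
        "statement": "The July snapshot is representative of the full quarter",
        "confidence": "medium",
        "impact": "high",
        "validated": False,
    }],
}

print(export_markdown(log))
```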