cargo-analytics


Cargo CLI — Analytics

Measurement and export: monitoring run metrics, downloading run and batch results, and exporting segment data.

See references/response-shapes.md for full JSON response structures.
See references/troubleshooting.md for common errors and how to fix them.
See references/examples/run-analytics.md for run metrics and error monitoring.
See references/examples/exports.md for data export and download examples.
For billing, usage metrics, and subscription: use the cargo-billing skill.

Prerequisites

npm install -g @cargo-ai/cli
cargo-ai login --oauth                                  # browser sign-in (recommended)
# or: cargo-ai login --token <your-api-token>           # workspace-scoped API token (non-interactive)
# Pin a default workspace at login (with --oauth)
cargo-ai login --oauth --workspace-uuid <uuid>

Verify with cargo-ai whoami. All commands output JSON to stdout. Without a global install, prefix every command with npx @cargo-ai/cli instead of cargo-ai.

Failed commands exit non-zero and print {"errorMessage": "..."} to stdout.
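When scripting around the CLI, the exit code and the errorMessage field can be checked together. A minimal Python sketch; parse_cli_output is a hypothetical helper name, not part of the CLI:

```python
import json

def parse_cli_output(stdout, returncode):
    """Parse JSON printed by a cargo-ai invocation, raising the CLI's own
    errorMessage when the command exited non-zero."""
    payload = json.loads(stdout) if stdout.strip() else {}
    if returncode != 0:
        raise RuntimeError(payload.get("errorMessage", "unknown error"))
    return payload
```

Pair it with subprocess.run(["cargo-ai", ...], capture_output=True, text=True) and pass in proc.stdout and proc.returncode.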

Discover resources first

Most analytics commands require UUIDs. Discover them before querying.

cargo-ai orchestration play list            # all plays (name, workflowUuid)
cargo-ai orchestration tool list            # all tools (name, workflowUuid)
cargo-ai orchestration workflow list        # all workflows (uuid only — no name)
cargo-ai ai agent list                      # all agents (uuid, name)
cargo-ai connection connector list          # all connectors (uuid, name, integrationSlug)
cargo-ai storage model list                 # all models (uuid, name, slug)
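Discovery output can then be filtered in a script to turn names into UUIDs. A hedged sketch, assuming `play list` returns an array of objects carrying `name` and `workflowUuid` as listed above (check references/response-shapes.md for the exact shape):

```python
def find_workflow_uuid(plays, name):
    """Return the workflowUuid of the play with the given name,
    given the parsed JSON array from `cargo-ai orchestration play list`."""
    for play in plays:
        if play.get("name") == name:
            return play["workflowUuid"]
    raise KeyError(f"no play named {name!r}")
```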

Quick reference

cargo-ai orchestration run get-metrics --workflow-uuid <uuid>
cargo-ai orchestration run download --workflow-uuid <uuid> --is-finished
cargo-ai orchestration run count --workflow-uuid <uuid> --statuses error
cargo-ai segmentation segment download --model-uuid <uuid> --filter '{"conjonction":"and","groups":[]}'

Workflow run metrics

Aggregated metrics for workflow runs (success/error rates, credits per node).

# Metrics for a workflow
cargo-ai orchestration run get-metrics --workflow-uuid <uuid>

# Scoped to a release, batch, or date range
cargo-ai orchestration run get-metrics --workflow-uuid <uuid> --release-uuid <uuid>
cargo-ai orchestration run get-metrics --workflow-uuid <uuid> --batch-uuid <uuid>
cargo-ai orchestration run get-metrics --workflow-uuid <uuid> \
  --created-after <start-date> --created-before <end-date>

Run count

Count runs matching specific criteria — useful for monitoring.

cargo-ai orchestration run count --workflow-uuid <uuid> --statuses error
cargo-ai orchestration run count --workflow-uuid <uuid> --is-finished \
  --created-after <start-date> --created-before <end-date>
cargo-ai orchestration run count --workflow-uuid <uuid> --batch-uuid <uuid>

Supports: --statuses, --batch-uuid, --release-uuid, --is-finished, --created-after, --created-before, --record-id, --record-title.
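For monitoring, the count can feed a simple threshold alert. A sketch, assuming the count command returns a JSON object with a `count` field (verify against references/response-shapes.md):

```python
def error_alert(count_payload, threshold=0):
    """True when the error-run count exceeds the allowed threshold.
    The `count` field name is an assumption about the response shape."""
    return count_payload.get("count", 0) > threshold
```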

Downloading run results

Two distinct commands — pick the right one for the job.

run download — full run records (metadata + per-node runContext)

Returns each run as a JSON object with status, timing, executions, and runContext.<nodeSlug> containing per-node outputs. Best for debugging or when you need the full execution history.

# All finished runs
cargo-ai orchestration run download --workflow-uuid <uuid> --is-finished

# Date range
cargo-ai orchestration run download --workflow-uuid <uuid> \
  --created-after <start-date> --created-before <end-date>

# Specific statuses
cargo-ai orchestration run download --workflow-uuid <uuid> --statuses success,error

# From a specific batch
cargo-ai orchestration run download --workflow-uuid <uuid> --batch-uuid <uuid>

run download-outputs — output of a specific node (CSV/JSON via signed URL)

This is the canonical way to get action results out of the platform. Maps to API POST /v1/orchestration/runs/download-outputs. Returns {"url": "..."} — a signed URL to a CSV (default) or JSON file containing only the output node's data with input/output context. Faster and cheaper than downloading whole run records when you only need the result.

# Required: --workflow-uuid + --output-node-slug
cargo-ai orchestration run download-outputs \
  --workflow-uuid <uuid> \
  --output-node-slug <slug> \
  --format json \
  --is-finished

# Filter by batch + status
cargo-ai orchestration run download-outputs \
  --workflow-uuid <uuid> \
  --output-node-slug <slug> \
  --batch-uuid <uuid> \
  --statuses finished

To find the output-node-slug: cargo-ai orchestration release get <release-uuid> → look at nodes[].slug. The terminal output node is typically named output or end.
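The {"url": "..."} response from download-outputs can then be consumed in a script. A minimal sketch (the file fetch itself is left to urllib or curl):

```python
import json

def signed_url(stdout):
    """Extract the signed download URL from download-outputs stdout."""
    return json.loads(stdout)["url"]

# e.g. urllib.request.urlretrieve(signed_url(out), "outputs.json")
```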

Downloading batch results

cargo-ai orchestration batch download --uuid <batch-uuid> --output-node-slug <node-slug>

To find the output-node-slug: run cargo-ai orchestration release get <release-uuid> (get the release UUID from the batch) and look at nodes[].slug.
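Selecting the slug from the release JSON can be scripted. A sketch, assuming the release object carries a `nodes` array with `slug` fields as described above; the preferred names follow the output/end convention noted earlier:

```python
def find_output_node_slug(release, preferred=("output", "end")):
    """Pick the terminal output node slug from a release's nodes[].slug list,
    falling back to the last node when no conventional name matches."""
    slugs = [n["slug"] for n in release.get("nodes", [])]
    for name in preferred:
        if name in slugs:
            return name
    return slugs[-1] if slugs else None
```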

Handling partial batch failures

A batch with status: "success" can still contain individual run failures. Always inspect the batch for errors before treating results as complete.

Step 1 — Check the batch summary:

cargo-ai orchestration batch get <batch-uuid>
# → .runsCount          = total records submitted
# → .executedRunsCount  = records that reached a terminal state (success or error)
# → .failedRunsCount    = records that errored
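The three counters above combine into a quick health check. A sketch, assuming `batch get` returns exactly these fields:

```python
def batch_health(batch):
    """Summarize a batch from runsCount / executedRunsCount / failedRunsCount."""
    total = batch["runsCount"]
    executed = batch["executedRunsCount"]
    failed = batch["failedRunsCount"]
    return {
        "pending": total - executed,
        "failed": failed,
        "failure_rate": failed / executed if executed else 0.0,
        "complete": executed == total and failed == 0,
    }
```

A batch reporting status: "success" with failed > 0 is exactly the partial-failure case this section is about.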

Step 2 — Count errors for the batch:

cargo-ai orchestration run count \
  --workflow-uuid <uuid> \
  --batch-uuid <batch-uuid> \
  --statuses error

Step 3 — Download failed runs to inspect root causes:

cargo-ai orchestration run download \
  --workflow-uuid <uuid> \
  --batch-uuid <batch-uuid> \
  --statuses error

Step 4 — Re-run only the failed records:

After fixing the underlying issue (connector credentials, bad input data, rate limits):

# Extract record IDs from the failed run download, then:
cargo-ai orchestration batch create \
  --workflow-uuid <uuid> \
  --data '{"kind":"recordIds","recordIds":["id1","id2","id3"]}'

Filtering by node output slug:

To download only a specific node's output from a batch (e.g. just the enrichment node, not the full run):

# 1. Get the release UUID from the batch
cargo-ai orchestration batch get <batch-uuid>
# → .releaseUuid

# 2. Find the node slug
cargo-ai orchestration release get <release-uuid>
# → nodes[].slug

# 3. Download that node's output
cargo-ai orchestration batch download \
  --uuid <batch-uuid> \
  --output-node-slug <node-slug>

Segment data export

Filter JSON uses conjonction (not conjunction) — this is intentional. See the cargo-orchestration skill's references/filter-syntax.md for the full filter syntax.

# Full export (all records)
cargo-ai segmentation segment download \
  --model-uuid <uuid> \
  --filter '{"conjonction":"and","groups":[]}'

# With sorting and limit
cargo-ai segmentation segment download \
  --model-uuid <uuid> \
  --filter '{"conjonction":"and","groups":[]}' \
  --sort '[{"columnSlug":"created_at","kind":"desc"}]' \
  --limit 1000

IMPORTANT: segment download requires --model-uuid, not --segment-uuid. Get the modelUuid from segment list.

For live paginated queries with enrichment, use segmentation segment fetch from the cargo-orchestration skill.
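The filter JSON above can be generated rather than hand-typed, which avoids the easy conjunction/conjonction typo. A minimal sketch:

```python
import json

def segment_filter(groups=None):
    """Build the --filter JSON for segment download. Note the intentional
    'conjonction' spelling required by the API (not 'conjunction')."""
    return json.dumps({"conjonction": "and", "groups": groups or []})
```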

Help

Every command supports --help:

cargo-ai orchestration run get-metrics --help
cargo-ai orchestration run download --help
cargo-ai segmentation segment download --help