cargo-analytics
Cargo CLI — Analytics
Measurement and export: monitoring run metrics, downloading run and batch results, and exporting segment data.
See references/response-shapes.md for full JSON response structures. See references/troubleshooting.md for common errors and how to fix them. See references/examples/run-analytics.md for run metrics and error monitoring. See references/examples/exports.md for data export and download examples. For billing, usage metrics, and subscription: use the cargo-billing skill.
Prerequisites
npm install -g @cargo-ai/cli
cargo-ai login --oauth # browser sign-in (recommended)
# or: cargo-ai login --token <your-api-token> # workspace-scoped API token (non-interactive)
# Pin a default workspace at login (with --oauth)
cargo-ai login --oauth --workspace-uuid <uuid>
Verify with cargo-ai whoami. All commands output JSON to stdout. Without a global install, prefix every command with npx @cargo-ai/cli instead of cargo-ai.
Failed commands exit non-zero and return {"errorMessage": "..."}.
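In a script, this convention is easy to handle: check the exit status, then parse the error payload. A minimal Python sketch, assuming only the documented {"errorMessage": "..."} failure shape (the helper name is ours):

```python
import json

def parse_cli_result(returncode, stdout):
    """Interpret cargo-ai output: JSON on success, {"errorMessage": ...} on failure.

    Returns (ok, payload): the parsed JSON on success, or the error
    message string on failure. Pair with subprocess.run, e.g.:
      proc = subprocess.run(["cargo-ai", "whoami"], capture_output=True, text=True)
      ok, payload = parse_cli_result(proc.returncode, proc.stdout)
    """
    if returncode == 0:
        return True, json.loads(stdout)
    try:
        return False, json.loads(stdout).get("errorMessage", stdout)
    except json.JSONDecodeError:
        return False, stdout  # non-JSON failure output, e.g. a crash
```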
Discover resources first
Most analytics commands require UUIDs. Discover them before querying.
cargo-ai orchestration play list # all plays (name, workflowUuid)
cargo-ai orchestration tool list # all tools (name, workflowUuid)
cargo-ai orchestration workflow list # all workflows (uuid only — no name)
cargo-ai ai agent list # all agents (uuid, name)
cargo-ai connection connector list # all connectors (uuid, name, integrationSlug)
cargo-ai storage model list # all models (uuid, name, slug)
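Since most later commands want a UUID, it helps to turn a `list` command's output into a name-to-UUID lookup once. A sketch assuming each list command prints a JSON array of objects carrying the fields shown in the comments above (name plus workflowUuid or uuid):

```python
import json

def index_by_name(list_json, uuid_key):
    """Build a name -> uuid lookup from a `... list` command's JSON stdout.

    uuid_key is "workflowUuid" for plays/tools and "uuid" for agents,
    connectors, and models (per the field names noted above).
    """
    return {item["name"]: item[uuid_key] for item in json.loads(list_json)}
```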
Quick reference
cargo-ai orchestration run get-metrics --workflow-uuid <uuid>
cargo-ai orchestration run download --workflow-uuid <uuid> --is-finished
cargo-ai orchestration run count --workflow-uuid <uuid> --statuses error
cargo-ai segmentation segment download --model-uuid <uuid> --filter '{"conjonction":"and","groups":[]}'
Workflow run metrics
Aggregated metrics for workflow runs (success/error rates, credits per node).
# Metrics for a workflow
cargo-ai orchestration run get-metrics --workflow-uuid <uuid>
# Scoped to a release, batch, or date range
cargo-ai orchestration run get-metrics --workflow-uuid <uuid> --release-uuid <uuid>
cargo-ai orchestration run get-metrics --workflow-uuid <uuid> --batch-uuid <uuid>
cargo-ai orchestration run get-metrics --workflow-uuid <uuid> \
--created-after <start-date> --created-before <end-date>
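The date flags pair naturally with a computed window, e.g. "metrics for the last 7 days". A sketch that assumes the flags accept ISO 8601 timestamps (the accepted format is not documented here; adjust if the CLI expects plain dates):

```python
from datetime import datetime, timedelta, timezone

def last_n_days_window(days=7):
    """Return (created_after, created_before) as ISO 8601 strings.

    Pass the pair as --created-after / --created-before.
    """
    now = datetime.now(timezone.utc)
    start = now - timedelta(days=days)
    return start.isoformat(), now.isoformat()
```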
Run count
Count runs matching specific criteria — useful for monitoring.
cargo-ai orchestration run count --workflow-uuid <uuid> --statuses error
cargo-ai orchestration run count --workflow-uuid <uuid> --is-finished \
--created-after <start-date> --created-before <end-date>
cargo-ai orchestration run count --workflow-uuid <uuid> --batch-uuid <uuid>
Supports: --statuses, --batch-uuid, --release-uuid, --is-finished, --created-after, --created-before, --record-id, --record-title.
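An error-count query like the first one above lends itself to a simple alerting check. A sketch assuming the command returns a JSON object with a numeric count field (the field name is an assumption; inspect the output shape first):

```python
import json

ERROR_BUDGET = 10  # alert when more than this many runs errored

def should_alert(count_json, budget=ERROR_BUDGET):
    """Decide whether `run count --statuses error` output breaches the budget.

    count_json is the command's stdout; the `count` field name is assumed.
    """
    return json.loads(count_json)["count"] > budget
```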
Downloading run results
Two distinct commands — pick the right one for the job.
run download — full run records (metadata + per-node runContext)
Returns each run as a JSON object with status, timing, executions, and runContext.<nodeSlug> containing per-node outputs. Best for debugging or when you need the full execution history.
# All finished runs
cargo-ai orchestration run download --workflow-uuid <uuid> --is-finished
# Date range
cargo-ai orchestration run download --workflow-uuid <uuid> \
--created-after <start-date> --created-before <end-date>
# Specific statuses
cargo-ai orchestration run download --workflow-uuid <uuid> --statuses success,error
# From a specific batch
cargo-ai orchestration run download --workflow-uuid <uuid> --batch-uuid <uuid>
run download-outputs — output of a specific node (CSV/JSON via signed URL)
This is the canonical way to get action results out of the platform. Maps to API POST /v1/orchestration/runs/download-outputs. Returns {"url": "..."} — a signed URL to a CSV (default) or JSON file containing only the output node's data with input/output context. Faster and cheaper than downloading whole run records when you only need the result.
# Required: --workflow-uuid + --output-node-slug
cargo-ai orchestration run download-outputs \
--workflow-uuid <uuid> \
--output-node-slug <slug> \
--format json \
--is-finished
# Filter by batch + status
cargo-ai orchestration run download-outputs \
--workflow-uuid <uuid> \
--output-node-slug <slug> \
--batch-uuid <uuid> \
--statuses finished
To find the output-node-slug: cargo-ai orchestration release get <release-uuid> → look at nodes[].slug. The terminal output node is typically named output or end.
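Picking the slug out of the release JSON can be scripted. A sketch assuming `release get` returns an object with nodes[].slug as described above, and using the "typically named output or end" heuristic:

```python
import json

def find_output_slug(release_json):
    """Return the likely output node slug from `release get` JSON stdout.

    Prefers a node literally named "output" or "end" (the typical
    terminal node names); otherwise falls back to the last node.
    """
    slugs = [node["slug"] for node in json.loads(release_json)["nodes"]]
    for preferred in ("output", "end"):
        if preferred in slugs:
            return preferred
    return slugs[-1] if slugs else None
```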
Downloading batch results
cargo-ai orchestration batch download --uuid <batch-uuid> --output-node-slug <node-slug>
To find the output-node-slug: run cargo-ai orchestration release get <release-uuid> (get the release UUID from the batch) and look at nodes[].slug.
Handling partial batch failures
A batch with status: "success" can still contain individual run failures. Always inspect the batch for errors before treating results as complete.
Step 1 — Check the batch summary:
cargo-ai orchestration batch get <batch-uuid>
# → .runsCount = total records submitted
# → .executedRunsCount = records that reached a terminal state (success or error)
# → .failedRunsCount = records that errored
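These three counters are enough to summarize a batch's health before deciding what to re-run. A sketch assuming the field names shown in the comments above:

```python
import json

def batch_health(batch_json):
    """Summarize `batch get` JSON into pending/succeeded/failed counts."""
    batch = json.loads(batch_json)
    total = batch["runsCount"]
    executed = batch["executedRunsCount"]
    failed = batch["failedRunsCount"]
    return {
        "pending": total - executed,     # not yet in a terminal state
        "succeeded": executed - failed,  # reached terminal state without error
        "failed": failed,
    }
```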
Step 2 — Count errors for the batch:
cargo-ai orchestration run count \
--workflow-uuid <uuid> \
--batch-uuid <batch-uuid> \
--statuses error
Step 3 — Download failed runs to inspect root causes:
cargo-ai orchestration run download \
--workflow-uuid <uuid> \
--batch-uuid <batch-uuid> \
--statuses error
Step 4 — Re-run only the failed records:
After fixing the underlying issue (connector credentials, bad input data, rate limits):
# Extract record IDs from the failed run download, then:
cargo-ai orchestration batch create \
--workflow-uuid <uuid> \
--data '{"kind":"recordIds","recordIds":["id1","id2","id3"]}'
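Extracting the record IDs and assembling the --data payload can be sketched as follows; the recordId field name on each downloaded run is an assumption (inspect a failed run's JSON to confirm where the record ID actually lives):

```python
import json

def rerun_payload(failed_runs_json):
    """Build the --data argument for `batch create` from failed run records.

    Assumes `run download --statuses error` yields a JSON array of runs
    each carrying a `recordId` field; deduplicates while preserving order.
    """
    seen = []
    for run in json.loads(failed_runs_json):
        record_id = run.get("recordId")
        if record_id and record_id not in seen:
            seen.append(record_id)
    return json.dumps({"kind": "recordIds", "recordIds": seen})
```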
Filtering by node output slug:
To download only a specific node's output from a batch (e.g. just the enrichment node, not the full run):
# 1. Get the release UUID from the batch
cargo-ai orchestration batch get <batch-uuid>
# → .releaseUuid
# 2. Find the node slug
cargo-ai orchestration release get <release-uuid>
# → nodes[].slug
# 3. Download that node's output
cargo-ai orchestration batch download \
--uuid <batch-uuid> \
--output-node-slug <node-slug>
Segment data export
Filter JSON uses conjonction (not conjunction) — this is intentional. See the cargo-orchestration skill's references/filter-syntax.md for the full filter syntax.
# Full export (all records)
cargo-ai segmentation segment download \
--model-uuid <uuid> \
--filter '{"conjonction":"and","groups":[]}'
# With sorting and limit
cargo-ai segmentation segment download \
--model-uuid <uuid> \
--filter '{"conjonction":"and","groups":[]}' \
--sort '[{"columnSlug":"created_at","kind":"desc"}]' \
--limit 1000
IMPORTANT: segment download requires --model-uuid, not --segment-uuid. Get the modelUuid from segment list.
For live paginated queries with enrichment, use segmentation segment fetch from the cargo-orchestration skill.
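Because the filter and sort are raw JSON on the command line, building them programmatically avoids shell-quoting mistakes and the conjunction/conjonction trap. A minimal sketch that assembles the argv for the empty-filter export shown above:

```python
import json

def segment_download_args(model_uuid, sort=None, limit=None):
    """Assemble cargo-ai argv for `segmentation segment download`.

    Uses the documented empty filter; note the intentional `conjonction`
    spelling. `sort` is a list like [{"columnSlug": ..., "kind": "desc"}].
    """
    args = [
        "segmentation", "segment", "download",
        "--model-uuid", model_uuid,
        "--filter", json.dumps({"conjonction": "and", "groups": []}),
    ]
    if sort:
        args += ["--sort", json.dumps(sort)]
    if limit:
        args += ["--limit", str(limit)]
    return args
```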
Help
Every command supports --help:
cargo-ai billing usage get-metrics --help
cargo-ai orchestration run download --help
cargo-ai segmentation segment download --help