turbo-operations
Turbo Pipeline Operations
Lifecycle commands, monitoring, and error reference for running Turbo pipelines. This is a lookup reference — for interactive troubleshooting of a broken pipeline, use /turbo-doctor. For building new pipelines, use /turbo-builder.
Pipeline States
| State | Description |
|---|---|
| running | Pipeline is actively processing data |
| starting | Pipeline is initializing |
| paused | Pipeline is paused (replicas set to 0) |
| stopped | Pipeline is not running (manually stopped) |
| error | Pipeline encountered an error |
| completed | Job-mode pipeline finished processing range |
Streaming vs Job Mode Lifecycle
| Operation | Streaming Pipeline | Job-Mode Pipeline (job: true) |
|---|---|---|
| List | Shows as running/paused | Shows as running/completed |
| Pause | Supported | Not supported |
| Resume | Supported | Not supported |
| Restart | Supported | Not supported — use delete + apply |
| Delete | Supported | Supported (auto-cleanup ~1hr after done) |
| Apply | Updates in place | Must delete first, then re-apply |
Job-Mode Pipeline Lifecycle
Job-mode pipelines (job: true) are one-time batch processes:
- Start — process data from start_at to end_block
- Run — process the bounded data range
- Complete — automatically stop when range is processed
- Auto-cleanup — ~1 hour after completion, automatically removed
Cannot pause, resume, or restart. Must delete before redeploying.
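Because job-mode pipelines can't be paused, resumed, or restarted, every redeploy is a delete-then-apply cycle. A minimal sketch (the pipeline name and YAML file are hypothetical examples; the guard keeps the script runnable even where the goldsky CLI isn't installed):

```shell
#!/bin/sh
# Hypothetical job-mode redeploy. PIPELINE and YAML_FILE are example names.
PIPELINE="my-backfill-job"
YAML_FILE="my-backfill-job.yaml"

DELETE_CMD="goldsky turbo delete $PIPELINE"
APPLY_CMD="goldsky turbo apply $YAML_FILE"

if command -v goldsky >/dev/null 2>&1; then
  # Job-mode pipelines can't be restarted: remove the finished run,
  # then re-apply the config to process the range again from scratch.
  $DELETE_CMD && $APPLY_CMD
else
  # Dry-run fallback: just show the commands that would execute.
  printf 'would run: %s\n' "$DELETE_CMD" "$APPLY_CMD"
fi
```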
Lifecycle Commands
List Pipelines
goldsky turbo list
Pause a Pipeline
Temporarily stop processing without deleting. Preserves all state for later resumption.
goldsky turbo pause <pipeline-name>
Resume a Pipeline
Restore a paused pipeline to the running state. Only paused pipelines can be resumed.
goldsky turbo resume <pipeline-name>
Restart a Pipeline
Trigger a pod restart for a running or paused pipeline.
goldsky turbo restart <pipeline-name>
# To clear all checkpoints and reprocess from the beginning:
goldsky turbo restart <pipeline-name> --clear-state
Delete a Pipeline
Permanently remove a pipeline. All checkpoints are lost. Data already written to sinks is preserved.
goldsky turbo delete <pipeline-name>
Delete and Recreate (Fresh Start)
goldsky turbo delete my-pipeline
goldsky turbo apply my-pipeline.yaml
Checkpoint Behavior
- Deleting a pipeline removes its checkpoints permanently
- Recreating with the same name starts fresh (no checkpoint recovery)
- To preserve checkpoints, use apply to update instead of delete/recreate
- Checkpoint state is tied to source names — renaming a source resets its checkpoint
- Checkpoint state is tied to pipeline names — renaming a pipeline resets all checkpoints
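The distinction above matters most when shipping a config change: apply on a live pipeline keeps checkpoints, while delete-then-apply reprocesses from the configured start_at. A sketch of the two paths (the pipeline name and the FRESH flag are illustrative, not an official workflow):

```shell
#!/bin/sh
# Illustrative update flow. PIPELINE/YAML_FILE are example names;
# FRESH=1 opts into the destructive path.
PIPELINE="transfers-indexer"
YAML_FILE="transfers-indexer.yaml"
FRESH="${FRESH:-0}"

if [ "$FRESH" = "1" ]; then
  # Destructive: checkpoints are removed with the pipeline, so the
  # recreated pipeline starts over from its configured start_at.
  PLAN="goldsky turbo delete $PIPELINE && goldsky turbo apply $YAML_FILE"
else
  # Safe default: in-place update preserves checkpoint state.
  PLAN="goldsky turbo apply $YAML_FILE"
fi

echo "$PLAN"
```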
Monitoring Commands
| Action | Command |
|---|---|
| List pipelines | goldsky turbo list |
| View live data | goldsky turbo inspect <name> -p |
| Inspect specific node | goldsky turbo inspect <name> -n <node> -p |
| View logs | goldsky turbo logs <name> |
| Follow logs | goldsky turbo logs <name> --follow |
| Logs with timestamps | goldsky turbo logs <name> --timestamps |
| Last N lines | goldsky turbo logs <name> --tail N |
| Logs since N seconds ago | goldsky turbo logs <name> --since N |
goldsky turbo inspect Flags
| Flag | Short | Description |
|---|---|---|
| --print | -p | Print records to stdout |
| --topology-node-keys | -n | Comma-separated node keys to filter (e.g. a transform name) |
| --buffer-size | -b | Max records to keep in buffer (default: 10000) |
Always use -p; include it in every inspect command you suggest.
Log Analysis Script
Use the helper script to quickly analyze pipeline logs:
./scripts/analyze-logs.sh <pipeline-name>
./scripts/analyze-logs.sh <pipeline-name> --tail 100
The script checks for common error patterns and reports findings with recommendations.
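If the helper script isn't available, the same pattern checks can be approximated with grep. A rough sketch (the sample log lines and the pattern list are illustrative stand-ins, not the script's actual implementation):

```shell
#!/bin/sh
# Sample text standing in for `goldsky turbo logs <pipeline-name>` output.
SAMPLE_LOG='2025-01-01T00:00:01Z INFO starting source ethereum.logs
2025-01-01T00:00:02Z ERROR connection refused: db.example.com:5432
2025-01-01T00:00:03Z ERROR authentication failed for user "writer"'

FOUND=""
# A few patterns from the error table in this reference; extend as needed.
for pattern in "connection refused" "authentication failed" \
               "secret not found" "duplicate key" "handler timeout"; do
  if printf '%s\n' "$SAMPLE_LOG" | grep -qi "$pattern"; then
    FOUND="$FOUND$pattern;"
  fi
done
echo "matched: $FOUND"
```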
Common Error Patterns
Detailed error patterns and solutions are in data/error-patterns.json.
| Error Pattern | Likely Cause | Fix |
|---|---|---|
| connection refused | Database unreachable | Check network/firewall settings |
| authentication failed | Wrong credentials | Update secret with correct credentials |
| secret not found | Missing secret | Create secret with goldsky secret create |
| SQL syntax error | Invalid transform SQL | Fix SQL in YAML and redeploy |
| duplicate key | Primary key collision | Ensure unique primary key in transform |
| script transform error | TypeScript runtime failure | Check script logic, null handling, return types |
| dynamic_table error | Backend connection issue | Verify dynamic table secret/table exists |
| WASM execution failed | Script crash in sandbox | Debug script — check for undefined access |
| handler timeout | External HTTP endpoint slow | Increase timeout_ms or fix handler endpoint |
Script Transform Issues
| Issue | Fix |
|---|---|
| undefined property access | Add null checks: input.field ?? '' |
| Wrong return type | Ensure returned object matches schema exactly |
| Missing return fields | All schema fields must be present in returned object |
| invoke is not a function | Ensure script defines function invoke(data) |
| BigInt errors | Use BigInt() constructor, not direct number literals |
Dynamic Table Issues
| Issue | Fix |
|---|---|
| Table not found | Create the table in PostgreSQL before deploying |
| No matches from check | Verify data exists in the backing table |
| Stale data | For postgres backend, verify rows are actually there |
| Memory pressure | Large in_memory tables → switch to postgres backend |
Troubleshooting Quick Reference
| Symptom | Likely Cause | Quick Fix |
|---|---|---|
| No data flowing | start_at: latest | Wait for new data or use earliest |
| Auth failed | Wrong credentials | Update secret with correct password |
| Connection refused | Network/firewall | Check host, whitelist Goldsky IPs |
| Storage exceeded | Neon free tier (512MB) | Upgrade plan or clear data |
| SQL error | Bad transform syntax | Validate YAML first |
| Pipeline not found | Name mismatch | Run goldsky turbo list to check names |
| Permission denied | Role insufficient | Verify Editor or Admin role in the project |
| pipeline already exists | Stale job-mode pipeline | Delete first, then re-apply |
| Cannot pause/resume job | Job-mode limitation | Job pipelines don't support pause/resume |
| Cannot restart job | Job-mode limitation | Delete + re-apply instead |
| Can't connect to inspect | Pipeline not running | Check status with goldsky turbo list |
| Logs are empty | Pipeline just started | Wait for data or check start_at |
| TUI disconnects | Pipeline interrupted | Auto-reconnects within 30 min; check status |
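A typical triage run chains the monitoring commands above: confirm the pipeline name exists, pull recent logs, then inspect live records. A sketch (the pipeline name is hypothetical; the guard keeps it runnable without the CLI installed):

```shell
#!/bin/sh
# Hypothetical triage sequence. PIPELINE is an example name.
PIPELINE="transfers-indexer"

STEP1="goldsky turbo list"                       # does the name exist?
STEP2="goldsky turbo logs $PIPELINE --tail 100"  # any recent errors?
STEP3="goldsky turbo inspect $PIPELINE -p"       # is data flowing?

for cmd in "$STEP1" "$STEP2" "$STEP3"; do
  if command -v goldsky >/dev/null 2>&1; then
    $cmd
  else
    # Dry-run fallback: print the command instead of executing it.
    echo "would run: $cmd"
  fi
done
```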
Related
- /turbo-doctor — Interactive diagnostic skill for pipeline issues
- /turbo-builder — Build and deploy new pipelines
- /turbo-pipelines — YAML configuration and architecture reference