# turbo-doctor

Pipeline Doctor
## Boundaries

- Diagnose and fix EXISTING pipeline problems interactively.
- Do not build new pipelines — that belongs to `/turbo-builder`.
- Do not serve as a command reference. If the user only needs CLI syntax or error pattern lookup, use the `/turbo-operations` skill instead.

Systematically identify and resolve pipeline issues by following a structured diagnostic workflow.
## Mode Detection

Before running any commands, check whether the Bash tool is available:

- If Bash is available (CLI mode): execute commands directly and parse the output.
- If Bash is NOT available (reference mode): output commands for the user to run. Ask them to paste the output back so you can analyze it and provide recommendations.
## Diagnostic Workflow

Follow these steps in order. Do not skip steps — each builds on the previous one.
### Step 1: Verify Authentication

Run `goldsky project list 2>&1` to check login status.

- If logged in: note the current project and continue.
- If not logged in: tell the user they need to authenticate; use the `/auth-setup` skill for guidance. Do not proceed until auth is confirmed.
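In CLI mode, the login check can be sketched as a small helper. The matched phrases ("not logged in", "unauthorized") are assumptions taken from the `/auth-setup` skill's description; the actual CLI wording may differ:

```shell
#!/bin/sh
# Hedged sketch: decide whether `goldsky project list` output indicates
# a missing login. The matched phrases are assumptions; adjust them to
# the real CLI wording if it differs.
auth_status() {
  if printf '%s' "$1" | grep -qiE 'not logged in|unauthorized'; then
    echo "needs-auth"
  else
    echo "authenticated"
  fi
}

# Typical use (CLI mode):
#   auth_status "$(goldsky project list 2>&1)"
```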
### Step 2: Identify the Pipeline

Run `goldsky turbo list` to show all pipelines.

Ask the user which pipeline they want to diagnose. If they already named one, confirm it exists in the list.

Note the pipeline's current status (running, paused, error, completed, starting).
### Step 3: Analyze Pipeline Status

Based on the status:

- `running` — Pipeline is active. Check whether the issue is data quality, latency, or unexpected behavior. Proceed to Step 4.
- `error` — Pipeline has failed. This is the most common case. Proceed to Step 4 for log analysis.
- `paused` — Pipeline was manually paused. Ask if they want to resume it.
- `starting` — Pipeline is initializing. Ask how long it has been starting. If more than 10 minutes, check logs.
- `completed` — Job-mode pipeline finished. Ask what the expected vs actual behavior was.
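The status-to-action mapping above can be expressed as a small dispatch helper (a sketch; the status names are the ones listed in this document):

```shell
#!/bin/sh
# Map a pipeline status to the next diagnostic action, mirroring the
# list above. Any value outside the documented set falls through to
# "unknown".
next_action() {
  case "$1" in
    running)   echo "check data quality, latency, or behavior; go to Step 4" ;;
    error)     echo "proceed to Step 4 for log analysis" ;;
    paused)    echo "ask whether the user wants to resume" ;;
    starting)  echo "ask how long; if over 10 minutes, check logs" ;;
    completed) echo "compare expected vs actual behavior" ;;
    *)         echo "unknown status: $1" ;;
  esac
}
```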
### Step 4: Examine Logs

Run `goldsky turbo logs <pipeline-name> --tail 100 2>&1` to get recent logs.

Analyze the output for known error patterns. Reference the error patterns in the `/turbo-operations` skill, including:

- Connection errors — sink unreachable, auth failed, timeout
- Schema errors — column mismatch, type mismatch, missing columns
- Resource errors — OOM, disk full, rate limiting
- Data errors — deserialization failures, invalid block ranges
- Configuration errors — invalid YAML, unknown dataset, bad transform
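A first-pass triage of the tailed logs can grep for the categories above. The keywords here are assumptions derived from the category descriptions, not the exact strings Goldsky emits, so treat unmatched lines as unclassified rather than forcing a match:

```shell
#!/bin/sh
# Rough per-line classifier for the five error categories above.
# Patterns are hypothetical keywords; real Goldsky log wording may
# differ. Order matters: earlier branches win on multi-keyword lines.
classify_line() {
  case "$1" in
    *unreachable*|*timeout*|*"auth failed"*)   echo "connection" ;;
    *"type mismatch"*|*column*)                echo "schema" ;;
    *OOM*|*"disk full"*|*"rate limit"*)        echo "resource" ;;
    *deserializ*|*"block range"*)              echo "data" ;;
    *YAML*|*"unknown dataset"*|*transform*)    echo "configuration" ;;
    *)                                         echo "unclassified" ;;
  esac
}

# Typical use:
#   goldsky turbo logs <name> --tail 100 2>&1 | while read -r line; do
#     classify_line "$line"
#   done | sort | uniq -c
```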
### Step 5: Check Secrets (if applicable)

If logs show connection or authentication errors:

1. Run `goldsky secret list` to verify all required secrets exist.
2. Cross-reference with the pipeline YAML if available.

Use the `/secrets` skill for guidance on creating or updating secrets.
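The cross-reference can be sketched by extracting secret names from the YAML and comparing against `goldsky secret list`. The `secret_name:` key is an assumption about the YAML shape; check the `/turbo-pipelines` skill for the actual sink field names:

```shell
#!/bin/sh
# Hedged sketch: list secret names referenced in a pipeline YAML so they
# can be eyeballed against `goldsky secret list` output. The
# `secret_name:` key is assumed, not confirmed; adjust to the real
# sink field name from /turbo-pipelines.
yaml_secrets() {
  grep -oE 'secret_name:[[:space:]]*"?[A-Za-z0-9_-]+"?' "$1" \
    | sed -E 's/secret_name:[[:space:]]*"?([A-Za-z0-9_-]+)"?/\1/' \
    | sort -u
}

# Typical use:
#   yaml_secrets pipeline.yaml
#   goldsky secret list
```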
### Step 6: Provide Diagnosis

Present your findings in this format:

```markdown
## Diagnosis

**Pipeline:** [name]
**Status:** [status]
**Issue:** [one-line summary]

**Root cause:**
[Detailed explanation of what's wrong]

**Evidence:**
- [Log line or observation 1]
- [Log line or observation 2]

**Recommended fix:**
1. [Step 1]
2. [Step 2]

**Prevention:**
[How to avoid this in the future]
```
### Step 7: Offer to Fix

If the fix involves CLI commands (restart, update secrets, redeploy), offer to execute them. Always confirm with the user before making changes.

Common fixes:

- Restart: `goldsky turbo restart <name>` (or `--clear-state` for a fresh start)
- Update secret: `goldsky secret create <name> --value <new-value>` (secrets are immutable — recreate to update)
- Redeploy: `goldsky turbo delete <name>`, then `goldsky turbo apply <file.yaml>`
- Resume: `goldsky turbo resume <name>` (for paused pipelines)
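The confirm-before-change rule can be enforced with a tiny wrapper around any of the fix commands above (a sketch; the wrapper name is mine, the goldsky commands are the ones listed in this document):

```shell
#!/bin/sh
# Prompt before executing a mutating command, per the rule of always
# confirming with the user before making changes. Anything other than
# an explicit "y" skips the command. The prompt goes to stderr so it
# does not pollute captured output.
confirm_and_run() {
  printf 'Run `%s`? [y/N] ' "$*" >&2
  read -r answer
  if [ "$answer" = "y" ]; then
    "$@"
  else
    echo "skipped"
  fi
}

# Typical use:
#   confirm_and_run goldsky turbo restart my-pipeline
```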
## Important Rules

- Never guess at the problem. Always check logs and status first.
- If you're unsure, say so and suggest what additional information would help.
- For job-mode pipelines: remember they cannot be paused, resumed, or restarted — only deleted and redeployed.
- Always ask before running destructive commands (`delete`, `restart --clear-state`).
- If the issue is beyond what the CLI can diagnose, suggest contacting Goldsky support with the specific error messages.
## Related

- `/turbo-operations` — CLI commands, lifecycle operations, and error pattern reference
- `/turbo-builder` — Build and deploy new pipelines
- `/turbo-pipelines` — YAML configuration and architecture reference
- `/secrets` — Manage sink credentials