# Goldsky Secrets Management

Create and manage secrets for pipeline sink credentials.
## Agent Instructions

When this skill is invoked, follow this streamlined workflow:

### Step 1: Verify Login + List Existing Secrets

Run `goldsky secret list` to confirm authentication and show existing secrets.

If authentication fails, invoke the auth-setup skill first.
### Step 2: Determine Intent Quickly

Skip unnecessary questions. If the user's intent is clear from context, proceed directly:
- User says "create a postgres secret" → Go straight to credential collection
- User pastes a connection string → Parse it immediately (see Connection String Parsing)
- User mentions a specific provider (Neon, Supabase, etc.) → Use provider-specific guidance
Only use `AskUserQuestion` if intent is genuinely unclear.
### Step 3: Connection String Parsing (Preferred for PostgreSQL)

If the user provides a connection string, parse it directly instead of asking questions.

PostgreSQL connection string formats:

```
postgres://USER:PASSWORD@HOST:PORT/DATABASE?sslmode=require
postgresql://USER:PASSWORD@HOST/DATABASE
```
Parsing logic:

- Extract: `user`, `password`, `host`, `port` (default 5432), `databaseName`
- Construct the JSON immediately
- Create the secret without further questions
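The extraction step above can be sketched in plain shell parameter expansion. This is an illustrative sketch only, not the skill's actual helper script; the JSON field names mirror the list above, but check `schemas/postgres.json` for the authoritative shape:

```shell
# Sketch: split a postgres connection string into its parts.
conn='postgresql://neondb_owner:abc123@ep-cool-name.us-east-2.aws.neon.tech/neondb?sslmode=require'

rest="${conn#*://}"          # drop the scheme
creds="${rest%%@*}"          # USER:PASSWORD
hostpart="${rest#*@}"        # HOST[:PORT]/DB[?params]
user="${creds%%:*}"
password="${creds#*:}"
hostport="${hostpart%%/*}"
host="${hostport%%:*}"
port="${hostport#*:}"
[ "$port" = "$hostport" ] && port=5432   # no explicit port: default 5432
dbpath="${hostpart#*/}"
database="${dbpath%%\?*}"    # strip ?sslmode=... query string

# Assemble the secret JSON (field names assumed from schemas/postgres.json)
printf '{"type":"jdbc","protocol":"postgres","host":"%s","port":%s,"databaseName":"%s","user":"%s","password":"%s"}\n' \
  "$host" "$port" "$database" "$user" "$password"
```

Note this simple split breaks if the password itself contains `@` or `:`; for such credentials, prefer the structured JSON format directly.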
Example - user provides:

```
postgresql://neondb_owner:abc123@ep-cool-name.us-east-2.aws.neon.tech/neondb?sslmode=require
```

Create using the connection string directly:

```shell
goldsky secret create --name SUGGESTED_NAME
# When prompted, paste the connection string:
# postgresql://neondb_owner:abc123@ep-cool-name.us-east-2.aws.neon.tech/neondb?sslmode=require
```
### Step 4: Provider-Specific Quick Paths

**Neon:**

- Connection string format: `postgresql://USER:PASS@ep-XXX.REGION.aws.neon.tech/neondb`
- Default port: 5432
- Common issue: the free tier has a 512MB limit; pipelines will fail with "project size limit exceeded"

**Supabase:**

- Connection string format: `postgresql://postgres:PASS@db.PROJECT.supabase.co:5432/postgres`
- Use the "Connection string" from Project Settings → Database

**PlanetScale (MySQL):**

- Use `"protocol": "mysql"` and port 3306
### Step 5: Create Secret Directly

Once you have credentials (from parsing or user input), create the secret immediately:

```shell
goldsky secret create \
  --name SECRET_NAME \
  --value '{"type":"jdbc","protocol":"postgres",...}' \
  --description "Optional description"
```

Naming convention: `PROJECT_PROVIDER` (e.g., `TRADEWATCH_NEON`, `ANALYTICS_SUPABASE`)
### Step 6: Verify

Run `goldsky secret list` to confirm creation.

## Secret JSON Schemas

JSON schema files are available in the `schemas/` folder. Each file contains the full schema with examples.
| Secret Type | Schema File | Type Field | Use Case |
|---|---|---|---|
| PostgreSQL | `postgres.json` | `jdbc` | Database sink |
| MySQL | `postgres.json` | `jdbc` | Database sink (protocol: mysql) |
| ClickHouse | `clickhouse.json` | `clickHouse` | Analytics database |
| Kafka | `kafka.json` | `kafka` | Event streaming |
| AWS S3 | `s3.json` | `s3` | Object storage |
| ElasticSearch | `elasticsearch.json` | `elasticSearch` | Search engine |
| DynamoDB | `dynamodb.json` | `dynamodb` | NoSQL database |
| SQS | `sqs.json` | `sqs` | Message queue |
| OpenSearch | `opensearch.json` | `opensearch` | Search/analytics |
| Webhook | `webhook.json` | `httpauth` | HTTP endpoints |

Schema location: `schemas/` (relative to this skill's directory)
## Quick Reference Examples

**PostgreSQL** connection string format:

```
postgres://username:password@host:port/database
```

```shell
goldsky secret create --name MY_POSTGRES_SECRET
# The CLI will prompt for the connection string interactively
```

**ClickHouse** connection string format:

```
https://username:password@host:port/database
```
**Kafka** JSON format:

```json
{
  "type": "kafka",
  "bootstrapServers": "broker:9092",
  "securityProtocol": "SASL_SSL",
  "saslMechanism": "PLAIN",
  "saslJaasUsername": "user",
  "saslJaasPassword": "pass"
}
```

**S3** colon-separated format:

```
access_key_id:secret_access_key
```

Or with a session token: `access_key_id:secret_access_key:session_token`
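As a minimal illustration of the colon-separated format, the value can be assembled in shell (the key values here are placeholders, not real credentials):

```shell
# Assemble the colon-separated S3 secret value from its two parts.
access_key_id='AKIAEXAMPLE'
secret_access_key='EXAMPLESECRETKEY'
value="${access_key_id}:${secret_access_key}"
echo "$value"
```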
**Webhook:**

Note: Turbo pipeline webhook sinks do not support Goldsky's native secrets management. Include auth headers directly in the pipeline YAML `headers:` field instead.
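A hypothetical sketch of what that could look like in the pipeline YAML. Only the `headers:` field comes from the note above; the sink type, `url`, and `from` field names are assumptions to verify against /turbo-pipelines:

```yaml
# Hypothetical webhook sink with an inline auth header (no secret reference).
sinks:
  webhook_out:
    type: webhook          # assumed sink type name
    from: my_source
    url: https://example.com/ingest
    headers:
      Authorization: "Bearer YOUR_TOKEN"
```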
## Connection String Parser

For PostgreSQL, use the helper script to parse connection strings:

```shell
./scripts/parse-connection-string.sh "postgresql://user:pass@host:5432/dbname"
# Output: JSON ready for goldsky secret create --value
```
### Confirm Before Creating

Show the user what will be created (mask the password with `***`) and ask for confirmation before running the command. After creation, run `goldsky secret list` to confirm the secret exists.
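Masking the password portion of a connection string before showing it to the user can be done with a small sed substitution; this is one possible sketch, assuming the standard `scheme://user:password@host` layout:

```shell
# Replace the password segment (between "user:" and "@") with ***.
conn='postgres://admin:secret@db.example.com:5432/mydb'
masked=$(printf '%s' "$conn" | sed -E 's#(://[^:]+:)[^@]+@#\1***@#')
echo "$masked"
```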
## Quick Reference

| Action | Command |
|---|---|
| Create | `goldsky secret create --name NAME --value "value"` |
| List | `goldsky secret list` |
| Reveal | `goldsky secret reveal NAME` |
| Update | `goldsky secret update NAME --value "new-value"` |
| Delete | `goldsky secret delete NAME` |
## Prerequisites

- Goldsky CLI installed
- Logged in (`goldsky login`)
- Connection credentials for your target sink
## Why Secrets Are Needed
Pipelines that write to external sinks (PostgreSQL, ClickHouse, Kafka, S3) need credentials to connect. Instead of putting credentials directly in your pipeline YAML, you store them as secrets and reference them by name.
Benefits:
- Credentials are encrypted and stored securely
- Pipeline configs can be shared without exposing secrets
- Credentials can be rotated without modifying pipelines
## Command Reference

| Command | Purpose | Key Flags |
|---|---|---|
| `goldsky secret create` | Create a new secret | `--name`, `--value`, `--description` |
| `goldsky secret list` | List all secrets | |
| `goldsky secret reveal <name>` | Show secret value | |
| `goldsky secret update <name>` | Update secret value | `--value`, `--description` |
| `goldsky secret delete <name>` | Delete a secret | `-f` (force, skip confirmation) |
## Common Patterns

### PostgreSQL Secret

```shell
goldsky secret create --name PROD_POSTGRES
# When prompted, provide the connection string:
# postgres://admin:secret@db.example.com:5432/mydb
```
Pipeline usage:

```yaml
sinks:
  output:
    type: postgres
    from: my_source
    schema: public
    table: transfers
    secret_name: PROD_POSTGRES
```
### ClickHouse Secret

```shell
goldsky secret create --name CLICKHOUSE_ANALYTICS
# When prompted, provide the connection string:
# https://default:secret@abc123.clickhouse.cloud:8443/analytics
```
Pipeline usage:

```yaml
sinks:
  output:
    type: clickhouse
    from: my_source
    table: events
    secret_name: CLICKHOUSE_ANALYTICS
    primary_key: id
```
### Rotating Credentials

Update an existing secret without changing pipeline configs:

```shell
goldsky secret update MY_POSTGRES_SECRET --value 'postgres://admin:NEW_PASSWORD@db.example.com:5432/mydb'
```

Active pipelines will pick up the new credentials on their next connection.
### Deleting Unused Secrets

```shell
# With confirmation prompt
goldsky secret delete OLD_SECRET

# Skip confirmation (for scripts)
goldsky secret delete OLD_SECRET -f
```

**Warning:** Deleting a secret that's in use will cause pipeline failures.
## Secret Naming Conventions

Use descriptive, uppercase names with underscores:

| Good | Bad |
|---|---|
| `PROD_POSTGRES_MAIN` | `secret1` |
| `STAGING_CLICKHOUSE` | `my-secret` |
| `KAFKA_PROD_CLUSTER` | `postgres` |

Include environment and purpose in the name for clarity.
## Troubleshooting

### Error: Secret not found

```
Error: Secret 'MY_SECRET' not found
```

Cause: The secret name doesn't exist or is misspelled.

Fix: Run `goldsky secret list` to see available secrets and check the exact name.
### Error: Secret already exists

```
Error: Secret 'MY_SECRET' already exists
```

Cause: Attempting to create a secret with a name that's already in use.

Fix: Use `goldsky secret update MY_SECRET --value "new-value"` to update, or choose a different name.
### Error: Invalid secret value format

```
Error: Invalid JSON in secret value
```

Cause: JSON syntax error in the secret value.

Fix: Validate your JSON before creating the secret:

```shell
# Test JSON validity
echo '{"url":"...","user":"..."}' | jq .
```
### Pipeline fails with "connection refused"

Cause: The credentials in the secret are incorrect or the database is unreachable.

Fix:

- Verify credentials work outside Goldsky: `psql "postgresql://..."`
- Check the secret value: `goldsky secret reveal MY_SECRET`
- Ensure the database allows connections from Goldsky's IP ranges
### Pipeline fails with "authentication failed"

Cause: Username or password in the secret is incorrect.

Fix: Update the secret with correct credentials:

```shell
goldsky secret update MY_SECRET --value 'postgres://correct:credentials@host:5432/db'
```
### Secret value contains special characters

Cause: JSON strings with special characters need proper escaping.

Fix: Use proper JSON escaping for special characters in password fields:

- Backslash: use `\\`
- Double quote: use `\"`
- Newline: use `\n`

With the structured JSON format, most special characters in passwords work without URL encoding, since the password is a separate field.
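Rather than escaping by hand, you can let jq build the JSON and handle escaping automatically. A sketch, where the host, database, and user values are placeholders:

```shell
# jq --arg escapes quotes, backslashes, and newlines in the password for us.
password='p@ss"w\ord'
secret_json=$(jq -cn --arg pw "$password" \
  '{type:"jdbc", protocol:"postgres", host:"db.example.com", port:5432, databaseName:"mydb", user:"admin", password:$pw}')
echo "$secret_json"
# Then pass it along, e.g.:
#   goldsky secret create --name MY_SECRET --value "$secret_json"
```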
## Related

- `/turbo-builder`: Build and deploy pipelines that use these secrets
- `/auth-setup`: Invoke this if the user is not logged in
- `/turbo-pipelines`: Pipeline YAML configuration reference