# Dibbla CLI

The dibbla CLI scaffolds projects and manages applications, databases, secrets, and workflows on the Dibbla platform. Deployed apps are available at `https://<alias>.dibbla.com`.
## Prerequisites
Install the CLI if it isn't already on the user's PATH:
| Platform | Command |
|---|---|
| macOS (Homebrew) | brew install dibbla-agents/tap/dibbla |
| macOS / Linux (shell installer) | curl -fsSL https://install.dibbla.com/install.sh \| sh |
| Windows (PowerShell) | powershell -NoProfile -ExecutionPolicy Bypass -Command "irm https://install.dibbla.com/install.ps1 \| iex" |
| Verify | dibbla --version |
The shell installer drops the binary into ~/.local/bin and adjusts PATH if needed. Self-update is available inside task files via the same installer URL.
Deploying requires a `Dockerfile` at the root of the directory you pass to `dibbla deploy`. The CLI does not auto-detect languages or generate a Dockerfile — if it's missing, the backend rejects the build with log output. All bundled templates in dibbla-agents/dibbla-public-templates ship a working Dockerfile you can copy (typically multi-stage: a Node stage for the JS build, a Go stage for the binary, then a small runtime image with `EXPOSE 80`).
## Commands at a glance
| Area | Commands |
|---|---|
| Run | run [path\|url], run --preview, run --env KEY=VAL, run --env-file <file>, run --work-dir <dir>, run --format plain\|gh |
| Template | template list [--refresh] [-v], template install <id> [<dir>] [--force] |
| Skills | skills list, skills install <id> (--user, --force, --no-agents) — install AI-agent guidance into .claude/skills/ + AGENTS.md + GEMINI.md |
| Setup | init (interactive setup wizard: update → login → install dibbla skill), update [--check] [--version vX.Y.Z] (self-update; defers to brew/apt/rpm/scoop/choco when one owns the binary), uninstall [--dry-run] [--keep-config] [--keep-skills] [--skill-only] (removes binary on script installs, keychain creds, ~/.config/dibbla/, ~/.dibbla/, and skill files at every recorded install root; for package-manager installs prints the native uninstall command instead of touching the binary) |
| Login | login [api_url], login --browser, login --api-key <token>, login --api-url <url>, login --write-env, login --no-keychain, logout |
| Feedback | feedback <message>, feedback list, feedback delete <id> |
| Deploy | deploy [path] -m "<msg>" [--alias name] [--update] [--require-login] [--access-policy] [--google-scopes] [--target-env <env>] [--profile <p>] — deploy from directory; -m becomes the VCS commit subject. --target-env / --profile / --no-public only apply when a dibbla.yaml is at the deploy root |
| Manifest | manifest validate [path] — local schema check for dibbla.yaml (no network) |
| Preview | preview [path] [--target-env <env>] [--profile <p>] [--no-public] — server-authoritative dry run; full env-aware resolution + quota check, no build, no apply |
| Apps | apps list, apps update <alias>, apps delete <alias>, apps restart <alias> --service <name> (per-service rolling restart) |
| Logs | logs <app> (last 15m), logs <app> --since 24h, logs <app> -f (follow), logs <app> -n 200 (tail), logs <app> --grep <regex>, logs <app> --json — runtime logs from Loki; logs <app> --service <name> filters to one service; logs <app> --service <name> --pod-stream streams pod logs via the K8s API when Loki isn't available |
| Db | db list, db create, db delete, db dump, db restore, db connect |
| Secrets | secrets list, secrets set, secrets get, secrets delete (global, -d <alias> for deployment-wide, or -d <alias> --service <name> for per-service) |
| Admin | admin reconcile — force one orphan-resource sweep on the deploy-api instance (gated by DIBBLA_ADMIN_TOKEN) |
| Workflows | workflows list, get, create, update, delete, validate, execute [--async|--follow], url, api-docs, logs <runId> [-f] |
| Runs | wf runs list [--workflow <name>] [--limit <N>], wf runs output <runId> — list past runs and fetch the api_response payload of a finished run |
| Nodes | nodes add <wf>, nodes remove <wf> <id> |
| Edges | edges add <wf> "<edge>", edges remove, edges list |
| Inputs | inputs set <wf> <node> <input> <value> |
| Tools | tools add <wf> <agent> <tool>, tools remove |
| Revisions | revisions list <wf>, revisions create, revisions restore |
| Functions | functions list, functions get <server> <name> |
## Agent guidelines
**Interactive prompts:** The following commands prompt for confirmation and will block if run non-interactively. Always pass `--yes` (or `-y`) when running these as an agent:

```
dibbla apps delete <alias> --yes
dibbla db delete <name> --yes
dibbla secrets delete <name> --yes
dibbla workflows delete <name> --yes
dibbla nodes remove <wf> <id> --yes
dibbla feedback delete <id> --yes
```
**Deploying an app for the first time:**

- Check if the app already exists:

  ```
  dibbla apps list
  ```

- If it does not exist, deploy with all required environment variables included in the deploy command — there is no app to attach them to yet:

  ```
  dibbla deploy . --alias my-app -m "feat: initial deploy" \
    -e DATABASE_URL=postgres://... -e API_KEY=secret -e NODE_ENV=production
  ```

- If it already exists, use `--update` for a zero-downtime rolling update:

  ```
  dibbla deploy . --alias my-app -m "fix: resolve 500 on /search" --update
  ```

  To change env vars on an existing app, use `apps update` instead: `dibbla apps update my-app -e NEW_VAR=value`.
**Key rules:**

- Every `dibbla deploy` must include `-m "<message>"`. The value becomes the git commit subject in the app's Dibbla-managed VCS history (and on the GitHub mirror, if configured). Treat it like a git commit: present-tense imperative, under ~72 chars, covering what changed and why — e.g. `-m "fix: handle null org in /api/me"`, `-m "feat: add nightly db backup workflow"`, `-m "chore: bump node to 20.14"`. For retries or mechanical redeploys, still say so explicitly: `-m "redeploy: retry after CF 524"`. Max 500 chars. Never run `dibbla deploy` without `-m`; a blank deploy history is a bug, not a default.
- `--force` causes downtime (tears down and redeploys). Prefer `--update` for existing apps. `--force` and `--update` are mutually exclusive.
- Environment variables set via `deploy -e` or `apps update -e` persist across updates — you only need to pass them once.
- Login guard: Use `--require-login` to require authentication. Combine with `--access-policy invite_only` to restrict to invited users, or `all_members` for org-wide access. Use `--google-scopes` to request additional Google OAuth scopes (e.g. Drive, Calendar).
- Use `--quiet`/`-q` on `db list`, `db delete`, `db connect` for machine-readable output in scripts.
- `db create --deployment <alias>` scopes the database and its auto-created secret to a specific deployment. The scoped secret is named `DATABASE_URL_<UPPERCASED_UNDERSCORED_NAME>` (e.g. `DATABASE_URL_MY_DB` for database `my_db`), not a plain `DATABASE_URL` — app code must read the suffixed env var.
- `db connect` prints a psql-compatible connection string via the Dibbla database proxy. Use `-q` for scripting: `psql $(dibbla db connect mydb -q)`.
- 524 on deploy ≠ failure. `dibbla deploy` holds a single HTTP connection during the backend build; builds over ~100s may return a Cloudflare 524 on the client even when the backend succeeds. Wait 2–5 minutes, then run `dibbla apps list` to check. Do not retry with `--force` — use `--update` if you must retry.
- Output modes: `dibbla deploy` streams a live buildkit-style step view when stdout is a TTY and switches to ISO-timestamped log lines (`<ts> [info] build step=N/M …`) when stdout is piped or in CI. Add `--quiet` for a single-line success/failure (script-friendly) or `--json` for a single structured object on stdout. On build failure the non-TTY mode also writes one structured JSON line to stderr with shape `{"event":"deploy.failed","step":"go-build","step_index":N,"step_count":M,"errors":[{file,line,col,message}],"retry_cmd":"…","api_error_code":"BUILD_FAILED"}` — coding agents should read this from stderr to locate failing files without scraping the human-readable build output. Add `--verbose-build` to ship the full server build log instead of the elided tail when parsed compile diagnostics aren't enough. Build failures exit `2`; other errors exit `1`.
- `.dibblaignore` controls Dibbla's managed VCS history, not what the Docker build sees. The backend always strips `.env`, `node_modules/`, `dist/`, `*.pem`, `*.key` and similar from VCS and reports each hit in `DeployResponse.vcs_filtered` as a warning. Adding those paths (or any generated/large artifact) to `.dibblaignore` at the deploy root silences the warning and keeps VCS clean. Per-file and per-commit size caps are hard rejections — committing a large build artifact will fail the deploy with `ErrCodeVCSFiltered`; the fix is to add the path to `.dibblaignore`. Full details in reference.md → deploy → `.dibblaignore`.
- Managed Postgres uses a self-signed TLS cert. App clients (pg, psycopg2, Prisma) need explicit SSL handling — see reference.md "TLS for application database clients" for working snippets.
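The structured failure line can be parsed without scraping the human-readable build output. A minimal sketch, assuming the documented shape; the step name, file, and error message below are illustrative sample values, not real output:

```shell
# Sample deploy.failed stderr line with the documented shape (values are illustrative).
line='{"event":"deploy.failed","step":"go-build","step_index":3,"step_count":7,"errors":[{"file":"main.go","line":42,"col":8,"message":"undefined: foo"}],"retry_cmd":"dibbla deploy . --update -m retry","api_error_code":"BUILD_FAILED"}'

# Pull out the failing step and file with sed; a real agent should prefer jq or a
# JSON library, since this regex approach breaks on escaped quotes inside values.
step=$(printf '%s' "$line" | sed -n 's/.*"step":"\([^"]*\)".*/\1/p')
file=$(printf '%s' "$line" | sed -n 's/.*"file":"\([^"]*\)".*/\1/p')
echo "failed step: $step, file: $file"
```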
**Designing a multi-service manifest:** Before authoring a dibbla.yaml, work through these design questions and confirm a plan with the user. Skipping this step at design time leads to retrofits that touch every consumer service (env vars, depends_on, service-discovery references), so it's worth the 60 seconds upfront.

- Which services should exist in only some envs? (e.g. an inline DB container in dev, a managed/external DB in prod) → put `profiles: [dev]` on the env-specific service. Decide this upfront because consumers will need env-aware values for the URL/host that points to it.
- Which fields differ across envs? (`replicas`, `image`, `MONGO_URL`, `LOG_LEVEL`, …) → use env-aware field maps (§ 6 in manifest.md). Different mechanism from profiles: profiles toggle whether a service exists at all; env-aware fields shape an existing service.
- Where will the data layer live in prod? If managed/external, the consumer needs an env-aware `MONGO_URL`/`DATABASE_URL` (`default:` → external value, `dev:` → `${DIBBLA_SVC_*}`) and the inline copy needs `profiles: [dev]`. The two mechanisms are paired — see § 7 in manifest.md for a worked example.
- How will the user iterate locally? The platform does not run `dibbla.yaml` locally — there is no `dibbla up`. Mirror the manifest into a `docker-compose.yml` next to it for tight inner-loop dev (see examples.md "Local iteration with docker-compose"). The two diverge in details (no `${DIBBLA_SVC_*}`, no NetworkPolicy, no env-aware resolution) but match in shape.
- Will any public service be sensitive in prod? (admin UIs, debug consoles, mail catchers, internal dashboards) → per-service `auth:` block with `require_login: true` and an `access_policy:`, or gate the whole service with `profiles: [dev]`. Shipping an admin UI publicly without auth is a top OWASP-class mistake; the guardrails checklist (guardrails.md) enforces this.
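The first three design questions usually resolve into a paired profiles + env-aware-fields fragment. The sketch below is illustrative only: service names, images, and values are hypothetical, and the exact field spellings come from the schema in manifest.md (§ 6 and § 7):

```yaml
# Hypothetical fragment; check manifest.md for authoritative field names.
services:
  mongo:                     # inline DB, dev-only (question 1)
    image: mongo:7
    profiles: [dev]          # service exists only in the dev env
  api:
    image: registry.example.com/api:latest
    env:
      MONGO_URL:             # env-aware field map (questions 2-3)
        dev: ${DIBBLA_SVC_MONGO}   # variable name assumed from the DIBBLA_SVC_* convention
        default: mongodb+srv://prod-cluster.example.com/app  # managed DB everywhere else
```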
**Pre-deploy guardrails:** Before calling `dibbla deploy`, you MUST complete the pre-deploy checklist and present findings to the user. Always wait for explicit user confirmation before deploying or fixing issues — never deploy autonomously. The guardrails workflow also writes a `REVIEW.md` file to the project root — the platform reads this and displays a review status indicator in the dashboard. See guardrails.md for the full checklist.
**Workflows:** A workflow is a typed DAG of function calls — nodes name a function from the registry, edges carry data port-to-port, an api node + api_response node make it callable over HTTP. Author in slim YAML (the format `wf get`/`wf create -f` consume); never hand-write the verbose React-Flow JSON. Minimal shape:

```yaml
name: my_workflow
nodes:
  - {id: api_input, type: api, inputs: [question], outputs: [question]}
  - {id: greet, type: function, function: handlebars_template,
     server: go-function-server1, inputs: {script: "Hello {{question}}!"},
     outputs: [error, output]}
  - {id: api_response, type: api_response, linked_to: api_input, inputs: [response]}
edges:
  - api_input.question -> greet.question
  - greet.output -> api_response.response
```
Before authoring anything non-trivial, run `dibbla fn list` to see what functions exist and `dibbla wf get <existing> -o yaml` on a similar workflow for shape — the function registry, not the YAML, is the source of truth. Pick the iteration loop that matches the change size: small tweak → patch HEAD with `nodes add`/`edges add`/`inputs set`/`tools add`; structural change → `wf get … -o yaml` → edit → `wf update -f`. Always `dibbla revisions create <wf>` before either; patches are not auto-snapshotted and `revisions restore` overwrites HEAD (it's not a checkout). For the complete model — node-type roles, the agent+tool pattern, all 13 validator errors and their fixes, execution/HTTP semantics, and the three canonical workflow shapes (transform, agent+tools, multi-stage pipeline) — see workflows.md.
**Workflow gotchas that bite once:**

- Pick `reasoning_agent_function` for new agents — `reasoning_agent_with_thread` has been observed to silently return empty responses with current Claude models. Always wire `agent.error -> api_response.error` so silent failures surface.
- Production callers must use the gateway URL, not the URL `wf api-docs` prints. Rewrite host: `https://workflow-server.dibbla.net/api/execute/<name>/<urlid>` (shown by `api-docs`, internal only) → `https://api.dibbla.net/api/wf/execute/<name>/<urlid>` (gateway, accepts `Authorization: Bearer ak_<workflow-api-key>`).
- After many `wf update` iterations, recreate the workflow before shipping — the `<urlid>` in the gateway URL can go silently stale (calls hang for ~5 minutes with no error). `wf delete --yes && wf create` gives a fresh id.
- Node ids collapse to the function name on `wf create`. Don't pick custom ids; refer to tools by function name.
- Result cache is 1 hour on `reasoning_agent_function`. During iterative testing, vary the input or use a `*_no_cache` variant.
- Always wrap workflow fetches in an `AbortController` with a 30–60s timeout and log before/after — Node's default 5-minute timeout makes failures look like hangs.
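The host rewrite in the gateway gotcha above is mechanical and can be scripted. A sketch, using a placeholder workflow name and urlid:

```shell
# URL as printed by `wf api-docs` (workflow name and urlid are placeholders).
internal='https://workflow-server.dibbla.net/api/execute/my_workflow/abc123'

# Swap the internal host+path prefix for the public gateway form.
gateway=$(printf '%s' "$internal" \
  | sed 's#workflow-server\.dibbla\.net/api/execute#api.dibbla.net/api/wf/execute#')
echo "$gateway"   # callers then add: Authorization: Bearer ak_<workflow-api-key>
```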
**Run monitoring & async execution:**

- `dibbla wf execute` is synchronous by default — it blocks until the workflow's `api_response` node fires (server-side timeout: 30 min). For long-running agent workflows or fire-and-forget batches, use `--async` to get back `response_metadata` immediately while the run continues in background. Tail it later with `wf logs <runId> --follow` and fetch the final output with `wf runs output <runId>`.
- `dibbla wf execute --follow` (`-f`) is the one-liner for interactive debugging: starts the run async, tails live logs to stdout, then prints the api_response payload after the server-emitted `run_completed` sentinel. Exits 0 on completion.
- `dibbla wf logs <runId>` works on any run. Live runs stream until completion; finished runs return historic + sentinel and exit immediately. Persistence policy: WARN/ERROR + the `run_completed` row are persisted; INFO/DEBUG are live-only — a quiet completed run will tail to essentially just `run completed`. For the full transcript of a finished run, use `wf runs output <runId>` instead.
- `dibbla wf runs list` (`-w <name>` to filter, `-n <N>` to page; server caps at 500) is the way to find a recent run id without copy-pasting from the dashboard or the DB.
- Short flag `-f` differs by command: on `dibbla logs` (app-logs) and `dibbla wf logs`, `-f` is `--follow`. On `dibbla wf execute`, `-f` is also `--follow` — but `--file` had to give up its short alias and uses `-F` instead. Don't suggest `-f payload.json` for `wf execute`; use `--file payload.json` or `-F payload.json`.
**Building Go workers (sdk-go):** The `github.com/dibbla-agents/sdk-go` Go SDK is how workers register custom functions and jobs with the platform. A worker is a long-lived gRPC client: `sdk.New(...)` → `server.RegisterFunction(...)` and `server.RegisterJob(...)` → `server.Start()` (which blocks forever). External user modules are restricted to `sdk.NewSimpleFunction[In, Out]` because the advanced `Function[In, Out]` handler signature exposes `internal/types` and `internal/state` — Go's `internal/` rule blocks those imports from any module other than sdk-go. The `JobHost` abstraction was removed; jobs register directly via `server.RegisterJob(handler)`. Once the worker is connected, its functions appear in `dibbla functions list` and become callable from workflow YAML by (server, function) pair (see workflows.md for consumer-side wiring). For the full SDK model — server options, function builders, the `JobHandler` interface, `JobContext` arg helpers, the `Logger` task/progress API, OAuth via `gs.OAuth`, and gotchas — see sdk-go.md.
**Non-TTY / agentic invocation:**

- When running from inside Claude Code's `!` prefix, an agent shell, CI with a browser, or any other non-TTY context, use `dibbla login --browser` instead of bare `dibbla login`. The interactive flow needs stdin for the survey picker; `--browser` skips that and goes straight to browser-based OAuth via a localhost callback.
- For true headless (SSH sessions, cloud VMs, CI runners with no local browser), use `dibbla login --api-key <token>` or set `DIBBLA_API_TOKEN` (and optionally `DIBBLA_API_URL`) env vars — the CLI reads env vars in CI automatically.
- Cloud VMs / SSH / Docker (no keyring): `dibbla login --api-key=<t> --api-url=<url> --write-env --no-keychain` validates the token against the API and writes `DIBBLA_API_TOKEN` + `DIBBLA_API_URL` to `./.env` (patching `.gitignore` if needed), without touching the OS keyring. Use this on fresh Ubuntu/EC2/GCE/Docker images where libsecret/gnome-keyring/pass isn't installed. Every subsequent `dibbla *` command in that directory reads credentials from `.env`. Requires CLI ≥ v1.2.4.
- `.env` in CWD is read by every command, including `login`. Put `DIBBLA_API_TOKEN=…` and `DIBBLA_API_URL=https://api.dibbla.net` in `./.env` and every `dibbla` invocation from that directory targets that server and token — no `login` call needed. Shell-exported vars still win over `.env` (godotenv does not overwrite). Requires CLI ≥ v1.2.4.
- `DIBBLA_AUTH_SERVICE_URL` is an internal compat alias for `DIBBLA_API_URL`, injected by the steprunner into child processes launched by `dibbla run`. Users should put `DIBBLA_API_URL` in `.env`; `DIBBLA_AUTH_SERVICE_URL` exists so child processes see the same server via the desktop/steprunner convention name.
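For bootstrap scripts on keyring-less machines, the `.env` convention can be set up directly. A sketch with a placeholder token; on a real box, prefer `dibbla login --write-env --no-keychain`, which also validates the token against the API:

```shell
# Write credentials to ./.env (placeholder token; CLI >= v1.2.4 reads this file).
cat > .env <<'EOF'
DIBBLA_API_TOKEN=ak_example_token
DIBBLA_API_URL=https://api.dibbla.net
EOF

# Keep the token out of version control, mirroring what --write-env does.
grep -qx '\.env' .gitignore 2>/dev/null || echo '.env' >> .gitignore
```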
**Running task files and templates:**

- `dibbla run <path>` executes a `dibbla-task.yaml` pipeline locally. Tool checks, shell commands, background dev servers, and browser-open side effects are all possible — the task file becomes shell under the user's account.
- `dibbla run <https-url>` fetches and executes a yaml from the network. This is equivalent to `curl | bash` — only run yamls from sources the user trusts (e.g. github.com/dibbla-agents/*). Work-dir defaults to the user's invocation CWD, so bootstrap clones land in the expected directory rather than in a temp dir.
- `dibbla template install <id>` is ergonomic sugar over `mkdir ./<template-path> && cd ./<template-path> && dibbla run <bootstrap-url>`. It refuses if the destination directory exists; pass `--force` to reuse. Use `dibbla template list` to see available ids.
- Prefer `dibbla run --preview` or `dibbla template list` before actually running, so the user can see what will execute.
**Installing this skill into a project (so other agents see it too):**

- `dibbla skills install dibbla` writes the skill files into `./.claude/skills/dibbla/` plus `AGENTS.md` and `GEMINI.md` pointers at the project root. Every major coding agent then picks up the guidance automatically — Claude Code via its native skill path, Cursor/Opencode/Codex/Copilot/Windsurf/Aider via `AGENTS.md` (the 2026 open standard), Gemini CLI via `GEMINI.md`.
- The skill content is embedded in the CLI binary (`go:embed`), so no network is required and the skill version is locked to the CLI version the user has installed. Run `dibbla --version` to see which one.
- Flags: `--user` installs into `$HOME` for machine-wide coverage instead of the current directory; `--no-agents` skips `AGENTS.md` and `GEMINI.md` (Claude Code only); `--force` overwrites skill files that have been edited locally. Unknown files inside `.claude/skills/<id>/` are always preserved.
- The AGENTS.md / GEMINI.md pointer block is marker-delimited (`<!-- >>> dibbla skill >>> -->` … `<!-- <<< dibbla skill <<< -->`) so existing AGENTS.md content outside the markers is preserved byte-for-byte across reruns.
- Re-running is idempotent — if nothing changed, nothing is rewritten (no mtime bump). Use `dibbla skills list` to see what skills the current CLI ships.
## Additional resources
- Platform compatibility: see platform.md for the Dockerfile contract, port-matching, runtime environment, managed Postgres TLS handling, secrets/env-var injection, the auth-header contract (`X-User-*` headers and Google OAuth scope brokering), the upload boundary, the multi-service runtime contract (§ 8.5), and the pre-deploy compatibility checklist. Read this when working in a Dibbla-connected project on Dockerfile, `.dibblaignore`, auth integration, or deploy-readiness questions.
- Multi-service manifest schema: see manifest.md for the full `dibbla.yaml` schema — services, jobs, env-aware fields, profiles, service discovery (`DIBBLA_SVC_*`), `expose_to`/NetworkPolicy, volumes, init containers, healthchecks, multiple public services, custom domains, cron, build-time secrets, quotas, error codes, and a worked end-to-end example. Read this whenever the user is authoring or reviewing a manifest, or asking a "how do I run X alongside Y in one deploy" question.
- Workflows: see workflows.md for the complete workflow model — slim YAML format, the three node types and the roles `function` plays (agent / tool / script / data fetcher), the agent+tool wiring pattern, all 13 validator errors with fixes, edges and data flow, the functions registry, the three idiomatic authoring loops, revision semantics, HTTP execution, the canonical workflow shapes, the pre-flight checklist, and footguns. Read this whenever the user asks anything that touches `dibbla wf`/`nodes`/`edges`/`inputs`/`tools`/`revisions`/`functions`.
- Go SDK: see sdk-go.md for the Dibbla Go SDK — `sdk.New` server bootstrap and options, `SimpleFunction` vs advanced `Function[In, Out]`, the `JobHandler` interface and `server.RegisterJob` (the old `JobHost` is gone), the `Logger` task/progress API, OAuth-on-behalf-of-user via `gs.OAuth`, TLS auto-detection, the `internal/` import footgun, and end-to-end deploy. Read this whenever the user is implementing a Dibbla function or job in Go.
- Full command and flag reference: see reference.md for usage, arguments, and all flags.
- Usage examples: see examples.md for copy-paste examples and scripting patterns.
- Pre-deploy guardrails: see guardrails.md for the mandatory pre-deploy security checklist (Checks 1–4 always; Check 5 for URL-fetched task files; Check 6 for multi-service manifests).
When suggesting or generating dibbla commands, use the reference for exact syntax and the examples for typical workflows. For "is my app ready for Dibbla?" questions, start in platform.md. For "build / iterate / debug a workflow" questions, start in workflows.md. For "implement a Go function/job for the platform" questions, start in sdk-go.md.