render-background-workers
# Render Background Workers
This skill explains worker services on Render: processes that consume jobs from a queue instead of serving HTTP. Pair with render-blueprints, render-env-vars, and render-networking when wiring render.yaml and private connectivity.
## When to Use
- Designing or debugging queue-backed workers (Celery, Sidekiq, BullMQ, Asynq, etc.)
- Choosing between a worker, Cron Job, or Workflow for background work
- Configuring Render Key Value as a broker (not a cache) with correct eviction policy
- Implementing graceful shutdown so in-flight jobs are not lost on deploy
Per-framework setup and signal-handling detail: references/queue-framework-setup.md, references/graceful-shutdown.md.
## How Workers Work
- Long-running services with no inbound (HTTP) traffic. Render does not expose a public URL or internal hostname for workers the way it does for web or private services—workers cannot receive private network traffic directed at them.
- The typical pattern is a poll loop: the process connects to a queue backend (often Render Key Value, Redis-compatible Valkey 8) and pulls jobs.
- Workers can initiate outbound connections on the private network—to PostgreSQL, Key Value, private services, web services (internal URLs), and the public internet—subject to your plan and firewall rules.
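The poll-loop pattern above can be sketched in a few lines. This is a minimal illustration using an in-memory list as a stand-in for the queue backend; a real worker would block on the broker instead (for example `BLPOP` against the Key Value instance at `REDIS_URL`), and most frameworks in the table below run this loop for you.

```python
job_queue = ["job-1", "job-2", "job-3"]  # stand-in for a Redis/Key Value list
results = []

def poll_once(queue):
    """Pull one job from the queue, or return None if it is empty."""
    return queue.pop(0) if queue else None

# The worker's core loop: pull a job, process it, repeat.
while True:
    job = poll_once(job_queue)
    if job is None:
        break  # a real worker would sleep or block on the broker, then continue
    results.append(f"processed {job}")
```

The key property is that the process is long-lived and initiates every connection itself, which is why workers need no inbound URL.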
## Queue Framework Overview
| Framework | Language | Queue backend | Notes |
|---|---|---|---|
| Celery | Python | Redis / Key Value | Most common Python task queue |
| Sidekiq | Ruby | Redis / Key Value | Standard for Rails |
| BullMQ | Node.js | Redis / Key Value | Modern Node queue (Redis-based) |
| Asynq | Go | Redis / Key Value | Go async task processing |
| Oban | Elixir | Postgres (not Redis) | Queue stored in the database |
## Pairing with Key Value
- Use Render Key Value as the job broker when your framework expects Redis.
- Set the maxmemory policy to `noeviction`. `allkeys-lru` and similar policies are for caches; evicting queue keys drops jobs.
- Wire `REDIS_URL` (or your framework's equivalent) via `fromService` with `type: keyvalue` and `property: connectionString` in the Blueprint.
- Blueprints require `ipAllowList` on Key Value—include the CIDRs that should reach the instance (often `[]` for private-network-only access; see render-blueprints / Key Value field reference).
See references/queue-framework-setup.md for minimal app + YAML examples.
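On the application side, the wired connection string just arrives as an environment variable. A small sketch of consuming it (the `redis://red-abc123:6379` value is hypothetical; on Render it is injected by `fromService` and you would pass the URL straight to your framework, e.g. `Celery("tasks", broker=...)`):

```python
import os
from urllib.parse import urlparse

# Hypothetical value for illustration; on Render, fromService injects this.
os.environ["REDIS_URL"] = "redis://red-abc123:6379"

url = urlparse(os.environ["REDIS_URL"])
broker_host = url.hostname  # internal hostname of the Key Value instance
broker_port = url.port
```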
## Worker vs Cron vs Workflow
| Need | Use | Why |
|---|---|---|
| Always-on queue consumer | Background Worker | Polls continuously; long-lived process |
| Periodic scheduled task | Cron Job | Runs on a schedule, exits; 12h max per run |
| Distributed parallel compute | Workflow | Each run gets its own instance; fan-out patterns |
| High-volume or bursty jobs | Workflow | Scales per run; no idle instance cost between runs |
## Graceful Shutdown
- Before stopping an instance, Render sends `SIGTERM`, then waits up to `maxShutdownDelaySeconds` (1–300, default 30) before `SIGKILL`.
- Workers should: (1) stop accepting new jobs, (2) finish the current job or checkpoint progress, (3) close connections, (4) exit 0.
- Set `maxShutdownDelaySeconds` to at least your longest safe job duration (see Dashboard or Blueprint).
Language- and framework-specific handlers: references/graceful-shutdown.md.
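The steps above reduce to a flag flipped by a signal handler, checked between jobs. A minimal Python sketch (frameworks like Celery and Sidekiq install equivalent handlers for you; here `signal.raise_signal` simulates Render delivering `SIGTERM` mid-run):

```python
import signal

shutting_down = False

def handle_sigterm(signum, frame):
    """Step 1: stop accepting new jobs by flipping a flag."""
    global shutting_down
    shutting_down = True

signal.signal(signal.SIGTERM, handle_sigterm)

jobs = iter(["a", "b", "c"])
processed = []

# Check the flag only between jobs, so the in-flight job finishes (step 2).
while not shutting_down:
    try:
        job = next(jobs)
    except StopIteration:
        break
    processed.append(job)
    if job == "b":
        signal.raise_signal(signal.SIGTERM)  # simulate Render's shutdown signal

# Steps 3-4: close broker/database connections here, then exit 0.
```

Because the flag is checked between iterations, job "b" completes but "c" is never started—exactly the drain behavior `maxShutdownDelaySeconds` gives you time for.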
## Blueprint Configuration
Minimal pattern: `type: worker`, `runtime`, `buildCommand`, `startCommand`, and `envVars` wired from Key Value.
```yaml
services:
  - type: keyvalue
    name: jobs
    plan: starter
    region: oregon
    ipAllowList: []

  - type: worker
    name: task-worker
    runtime: python
    region: oregon
    plan: starter
    buildCommand: pip install -r requirements.txt
    startCommand: celery -A tasks worker --loglevel=info
    envVars:
      - key: REDIS_URL
        fromService:
          name: jobs
          type: keyvalue
          property: connectionString
```
Optional: set `maxShutdownDelaySeconds` on the worker service for jobs that need longer to drain.
## References
| Topic | File |
|---|---|
| Celery, Sidekiq, BullMQ, Asynq, Oban setup + YAML | references/queue-framework-setup.md |
| `SIGTERM`, `maxShutdownDelaySeconds`, per-language patterns | references/graceful-shutdown.md |
## Related Skills
- render-deploy — First deploy, CLI, service creation
- render-blueprints — Full `render.yaml` schema, `fromService`, projects
- render-networking — Private URLs, what can call what
- render-scaling — Worker plans, instance counts, limits