openserv-agent-sdk
OpenServ Agent SDK
Build and deploy custom AI agents for the OpenServ platform using TypeScript.
Why build an agent?
An OpenServ agent is a service that runs your code and exposes it on the OpenServ platform—so it can be triggered by workflows, other agents, or paid calls (e.g. x402). The platform sends tasks to your agent; your agent runs your capabilities (APIs, tools, file handling) and returns results. You don't have to use an LLM—e.g. it could be a static API that just returns data. If you need LLM reasoning, you have two options: use runless capabilities (the platform handles the AI call for you—no API key needed) or use generate() (delegates the LLM call to the platform); alternatively, bring your own LLM (any provider you have access to).
How it works (the flow)
- Define your agent — System prompt plus capabilities. Capabilities come in two flavors: runnable (with a Zod schema and a
runhandler) and runless (just a name and description—the platform handles the AI call automatically). You can also usegenerate()inside runnable capabilities to delegate LLM calls to the platform. - Register with the platform — You need an account on the platform; often the easiest way is to let
provision()create one for you automatically by creating a wallet and signing up with it (that account is reused on later runs). Callprovision()(from@openserv-labs/client): it creates or reuses a wallet, registers the agent, and writes API key and auth token into your env (or you passagent.instanceto bind them directly). In development you can skip setting an endpoint URL; the SDK can use a built-in tunnel to the platform. - Start the agent — Call
run(agent). The agent listens for tasks, runs your capabilities (and your LLM if you use one), and responds. Usereference.mdandtroubleshooting.mdfor details;examples/has full runnable code.
What your agent can do
- Runless Capabilities — Just a name and description. The platform handles the AI call automatically—no API key, no
run()function needed. Optionally defineinputSchemaandoutputSchemafor structured I/O. - Runnable Capabilities — The tools your agent can run (e.g. search, transform data, call APIs). Each has a name, description,
inputSchema, andrun()function. generate()method — Delegate LLM calls to the platform from inside any runnable capability. No API key needed—the platform performs the call and records usage. Supports text and structured output.- Task context — When running in a task, the agent can attach logs and uploads to that task via methods like
addLogToTask()anduploadFile(). - Multi-agent workflows — Your agent can be part of workflows with other agents; see the openserv-client skill for the Platform API, workflows, and ERC-8004 on-chain identity.
Reference: reference.md (patterns) · troubleshooting.md (common issues) · examples/ (full examples)
Quick Start
Installation
npm install @openserv-labs/sdk @openserv-labs/client zod
Note:
openaiis only needed if you use theprocess()method for direct OpenAI calls. Most agents don't need it—use runless capabilities orgenerate()instead.
Minimal Agent
See examples/basic-agent.ts for a complete runnable example.
The pattern is simple:
- Create an
Agentwith a system prompt - Add capabilities with
agent.addCapability() - Call
provision()to register on the platform (passagent.instanceto bind credentials) - Call
run(agent)to start
Complete Agent Template
File Structure
my-agent/
├── src/agent.ts
├── .env
├── .gitignore
├── package.json
└── tsconfig.json
Dependencies
npm init -y && npm pkg set type=module
npm i @openserv-labs/sdk @openserv-labs/client dotenv zod
npm i -D @types/node tsx typescript
Note: The project must use
"type": "module"inpackage.json. Add a"dev": "tsx src/agent.ts"script for local development. Only installopenaiif you use theprocess()method for direct OpenAI calls.
.env
Most agents don't need any LLM API key—use runless capabilities or generate() and the platform handles LLM calls for you. If you use process() for direct OpenAI calls, set OPENAI_API_KEY. The rest is filled by provision().
# Only needed if you use process() for direct OpenAI calls:
# OPENAI_API_KEY=your-openai-key
# ANTHROPIC_API_KEY=your_anthropic_key # If using Claude directly
# Auto-populated by provision():
WALLET_PRIVATE_KEY=
OPENSERV_API_KEY=
OPENSERV_AUTH_TOKEN=
PORT=7378
# Production: skip tunnel and run HTTP server only
# DISABLE_TUNNEL=true
# Force tunnel even when endpointUrl is set
# FORCE_TUNNEL=true
Capabilities
Capabilities come in two flavors:
Runless Capabilities (recommended for most use cases)
Runless capabilities don't need a run function—the platform handles the AI call automatically. Just provide a name and description:
// Simplest form — just name + description
agent.addCapability({
name: 'generate_haiku',
description: 'Generate a haiku poem (5-7-5 syllables) about the given input.'
})
// With custom input schema
agent.addCapability({
name: 'translate',
description: 'Translate text to the target language.',
inputSchema: z.object({
text: z.string(),
targetLanguage: z.string()
})
})
// With structured output
agent.addCapability({
name: 'analyze_sentiment',
description: 'Analyze the sentiment of the given text.',
outputSchema: z.object({
sentiment: z.enum(['positive', 'negative', 'neutral']),
confidence: z.number().min(0).max(1)
})
})
- No
runfunction — the platform performs the LLM call - No API key needed — the platform handles it
inputSchemais optional — defaults toz.object({ input: z.string() })if omittedoutputSchemais optional — define it for structured output from the platform
See examples/haiku-poet-agent.ts for a complete runless example.
Runnable Capabilities
Runnable capabilities have a run function for custom logic. Each requires:
name- Unique identifierdescription- What it does (helps AI decide when to use it)inputSchema- Zod schema defining parametersrun- Function returning a string
agent.addCapability({
name: 'greet',
description: 'Greet a user by name',
inputSchema: z.object({ name: z.string() }),
async run({ args }) {
return `Hello, ${args.name}!`
}
})
See examples/capability-example.ts for basic capabilities.
Note: The
schemaproperty still works as an alias forinputSchemabut is deprecated. UseinputSchemafor new code.
Using Agent Methods
Access this in capabilities to use agent methods like addLogToTask(), uploadFile(), generate(), etc.
See examples/capability-with-agent-methods.ts for logging and file upload patterns.
Agent Methods
generate() — Platform-Delegated LLM Calls
The generate() method lets you make LLM calls without any API key. The platform performs the call and records usage to the workspace.
// Text generation
const poem = await this.generate({
prompt: `Write a short poem about ${args.topic}`,
action
})
// Structured output (returns validated object matching the schema)
const metadata = await this.generate({
prompt: `Suggest a title and 3 tags for: ${poem}`,
outputSchema: z.object({
title: z.string(),
tags: z.array(z.string()).length(3)
}),
action
})
// With conversation history
const followUp = await this.generate({
prompt: 'Suggest a related topic.',
messages, // conversation history from run function
action
})
Parameters:
prompt(string) — The prompt for the LLMaction(ActionSchema) — The action context (passed into yourrunfunction)outputSchema(Zod schema, optional) — When provided, returns a validated structured outputmessages(array, optional) — Conversation history for multi-turn generation
The action parameter is required because it identifies the workspace/task for billing. Use it inside runnable capabilities where action is available from the run function arguments.
Task Management
await agent.createTask({ workspaceId, assignee, description, body, input, dependencies })
await agent.updateTaskStatus({ workspaceId, taskId, status: 'in-progress' })
await agent.addLogToTask({ workspaceId, taskId, severity: 'info', type: 'text', body: '...' })
await agent.markTaskAsErrored({ workspaceId, taskId, error: 'Something went wrong' })
const task = await agent.getTaskDetail({ workspaceId, taskId })
const tasks = await agent.getTasks({ workspaceId })
File Operations
const files = await agent.getFiles({ workspaceId })
await agent.uploadFile({ workspaceId, path: 'output.txt', file: 'content', taskIds: [taskId] })
await agent.deleteFile({ workspaceId, fileId })
Action Context
The action parameter in capabilities is a union type — task only exists on the 'do-task' variant. Always narrow with a type guard before accessing action.task:
async run({ args, action }) {
// action.task does NOT exist on all action types — you must narrow first
if (action?.type === 'do-task' && action.task) {
const { workspace, task } = action
workspace.id // Workspace ID
workspace.goal // Workspace goal
task.id // Task ID
task.description // Task description
task.input // Task input
action.me.id // Current agent ID
}
}
Do not extract action?.task?.id before the type guard — TypeScript will error with Property 'task' does not exist on type 'ActionSchema'.
Workflow Name & Goal
The workflow object in provision() requires two important properties:
name(string) - This becomes the agent name in ERC-8004. Make it polished, punchy, and memorable — this is the public-facing brand name users see. Think product launch, not variable name. Examples:'Crypto Alpha Scanner','AI Video Studio','Instant Blog Machine'.goal(string, required) - A detailed description of what the workflow accomplishes. Must be descriptive and thorough — short or vague goals will cause API calls to fail. Write at least a full sentence explaining the workflow's purpose.
workflow: {
name: 'Haiku Poetry Generator', // Polished display name — the ERC-8004 agent name users see
goal: 'Transform any theme or emotion into a beautiful traditional 5-7-5 haiku poem using AI',
trigger: triggers.x402({ ... }),
task: { description: 'Generate a haiku about the given topic' }
}
Trigger Types
import { triggers } from '@openserv-labs/client'
triggers.webhook({ waitForCompletion: true, timeout: 600 })
triggers.x402({ name: '...', description: '...', price: '0.01', timeout: 600 })
triggers.cron({ schedule: '0 9 * * *' })
triggers.manual()
Important: Always set
timeoutto at least 600 seconds (10 minutes) for webhook and x402 triggers. Agents often take significant time to process requests — especially when performing research, content generation, or other complex tasks. A low timeout will cause premature failures. For multi-agent pipelines with many sequential steps, consider 900 seconds or more.
API Keys: Agent vs User
provision() creates two types of credentials. They are not interchangeable:
OPENSERV_API_KEY(Agent API key) — Used internally by the SDK to authenticate when receiving tasks. Set automatically byprovision()when you passagent.instance. Do not use this key withPlatformClient.WALLET_PRIVATE_KEY/OPENSERV_USER_API_KEY(User credentials) — Used withPlatformClientto make management calls (list tasks, debug workflows, etc.). Authenticate withclient.authenticate(walletKey)or passapiKeyto the constructor.
If you need to debug tasks or inspect workflows, use wallet authentication:
const client = new PlatformClient()
await client.authenticate(process.env.WALLET_PRIVATE_KEY)
const tasks = await client.tasks.list({ workflowId: result.workflowId })
See troubleshooting.md for details on 401 errors.
Deployment
Local Development
npm run dev
The run() function automatically:
- Starts the agent HTTP server (port 7378, with automatic fallback)
- Connects via WebSocket to
agents-proxy.openserv.ai - Routes platform requests to your local machine
No need for ngrok or other tunneling tools - run() handles this seamlessly. Just call run(agent) and your local agent is accessible to the platform.
Production
When deploying to a hosting provider like Cloud Run, set DISABLE_TUNNEL=true as an environment variable. This makes run() start only the HTTP server without opening a WebSocket tunnel — the platform reaches your agent directly at its public URL.
await provision({
agent: {
name: 'my-agent',
description: '...',
endpointUrl: 'https://my-agent.example.com' // Required for production
},
workflow: {
name: 'Lightning Service Pro',
goal: 'Describe in detail what this workflow does — be thorough, vague goals cause failures',
trigger: triggers.webhook({ waitForCompletion: true, timeout: 600 }),
task: { description: 'Process incoming requests' }
}
})
// With DISABLE_TUNNEL=true, run() starts only the HTTP server (no tunnel)
await run(agent)
ERC-8004: On-Chain Agent Identity
After provisioning, register your agent on-chain for discoverability via the Identity Registry.
Requires ETH on Base. Registration calls
register()on the ERC-8004 contract on Base mainnet (chain 8453), which costs gas. The wallet created byprovision()starts with a zero balance. Fund it with a small amount of ETH on Base before the first registration attempt. The wallet address is logged during provisioning (Created new wallet: 0x...).
Always wrap in try/catch so a registration failure (e.g. unfunded wallet) doesn't prevent
run(agent)from starting.
Two important patterns:
- Use
dotenvprogrammatically (notimport 'dotenv/config') so you can reload.envafterprovision()writesWALLET_PRIVATE_KEY. - Call
dotenv.config({ override: true })afterprovision()to pick up the freshly written key before ERC-8004 registration.
import dotenv from 'dotenv'
dotenv.config()
import { Agent, run } from '@openserv-labs/sdk'
import { provision, triggers, PlatformClient } from '@openserv-labs/client'
// ... define agent and capabilities ...
const result = await provision({
agent: { instance: agent, name: 'my-agent', description: '...' },
workflow: {
name: 'My Service',
goal: 'Detailed description of what the workflow does',
trigger: triggers.x402({ name: 'My Service', description: '...', price: '0.01', timeout: 600 }),
task: { description: 'Process requests' }
}
})
// Reload .env to pick up WALLET_PRIVATE_KEY written by provision()
dotenv.config({ override: true })
// Register on-chain (non-blocking — requires funded wallet on Base)
try {
const client = new PlatformClient()
await client.authenticate(process.env.WALLET_PRIVATE_KEY)
const erc8004 = await client.erc8004.registerOnChain({
workflowId: result.workflowId,
privateKey: process.env.WALLET_PRIVATE_KEY!,
name: 'My Service',
description: 'What this agent does'
})
console.log(`Agent ID: ${erc8004.agentId}`) // "8453:42"
console.log(`TX: ${erc8004.blockExplorerUrl}`)
console.log(`Scan: ${erc8004.scanUrl}`) // "https://www.8004scan.io/agents/base/42"
} catch (error) {
console.warn('ERC-8004 registration skipped:', error instanceof Error ? error.message : error)
}
await run(agent)
- First run mints a new identity NFT. Re-runs update the URI — agent ID stays the same.
- Never clear the wallet state unless you intentionally want a new agent ID. To update metadata, just re-run.
- Default chain: Base mainnet (8453). Pass
chainId/rpcUrlfor others.
See openserv-client skill for the full ERC-8004 API reference and troubleshooting.
DO NOT USE
this.process()inside capabilities — Legacy method requiring an OpenAI API key. Usethis.generate()instead (platform-delegated, no key needed), or use runless capabilitiesdoTaskoverride — The SDK handles task execution automaticallythis.completeTask()— Task completion is handled by the Runtime API
Updating This Skill
This skill may be outdated. The OpenServ team ships updates regularly—new features, API changes, and bug fixes won't be reflected here until you update. If something isn't working as expected, or you want to make sure you have the latest information, update the skill before proceeding.
# Check if updates are available
npx skills check
# Update all installed skills to latest versions
npx skills update
Or reinstall the OpenServ skills directly:
npx skills add openserv-labs/skills
Related Skills
- openserv-client - Full Platform Client API reference
- openserv-multi-agent-workflows - Multi-agent collaboration patterns
- openserv-launch - Launch tokens on Base blockchain
- openserv-ideaboard-api - Find ideas and ship agent services on the Ideaboard