# Weave Integration
Add W&B Weave observability to any LLM project. Weave traces all LLM calls, captures tokens/latency/costs, and provides a UI for debugging and evaluation.
## Quick Start
### 1. Install

```bash
# TypeScript/Node.js
npm install weave

# Python
pip install weave
```
### 2. Get API Key

Set the `WANDB_API_KEY` environment variable. Get your key from wandb.ai/settings.

```bash
export WANDB_API_KEY="your-key-here"
```
### 3. Initialize

```typescript
// TypeScript
import * as weave from 'weave';

await weave.init('your-team/project-name');
```

```python
# Python
import weave

weave.init('your-team/project-name')
```
### 4. Trace LLM Calls

**Auto-patching** (supported providers are traced automatically):

```typescript
// TypeScript - CommonJS: works out of the box
import OpenAI from 'openai';
import * as weave from 'weave';

await weave.init('my-project');
const client = new OpenAI();
// All calls are now traced automatically
```
**Manual wrapping** (for custom functions or unsupported libraries):

```typescript
// TypeScript
const myFunction = weave.op(async (input: string) => {
  // your code here
  return input;
});
```

```python
# Python
@weave.op()
def my_function(input: str):
    # your code here
    return input
```
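Conceptually, an op wrapper records each call's inputs, output, and latency as a trace entry. The toy decorator below sketches that idea in plain Python — it is *not* Weave's implementation, and `traced`, `CALLS`, and `summarize` are made-up names for illustration only:

```python
import functools
import time

# Toy illustration of what an op-style wrapper captures; NOT Weave's
# actual implementation. `CALLS` and `traced` are made-up names.
CALLS = []

def traced(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        output = fn(*args, **kwargs)
        # Record one trace entry per call: op name, inputs, output, latency.
        CALLS.append({
            "op": fn.__name__,
            "inputs": {"args": args, "kwargs": kwargs},
            "output": output,
            "latency_s": time.perf_counter() - start,
        })
        return output
    return wrapper

@traced
def summarize(text: str) -> str:
    # Stand-in for an LLM call.
    return text[:10]

summarize("hello world, this is a long prompt")
```

With real Weave, `weave.op` plays the role of `traced` and the recorded calls land in the W&B UI instead of a local list.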
## TypeScript Setup Details

See references/typescript.md for:

- ESM configuration (`--import=weave/instrument`)
- Bundler compatibility (Next.js, Vite)
- Manual patching fallback
## Supported Providers (Auto-traced)
OpenAI, Anthropic, Cohere, Mistral, Google, Groq, Together AI, LiteLLM, Azure, Bedrock, Cerebras, HuggingFace, OpenRouter, NVIDIA NIM, and more.
Full list: https://docs.wandb.ai/weave/guides/integrations
## Integration Workflow

When adding Weave to a project:

- **Find LLM call sites** — search for OpenAI/Anthropic client usage
- **Add `weave.init()`** — early in app startup, before any LLM calls
- **Verify auto-patching** — check that traces appear in the W&B UI
- **Wrap custom functions** — use `weave.op()` for additional visibility
- **Add cost tracking** — Weave tracks tokens automatically for supported providers
## Viewing Traces
After running your app:
- Open wandb.ai → Your project → Weave tab
- See all traces with inputs, outputs, latency, token usage, costs
- Filter, search, and export call data
## Environment Variables

| Variable | Purpose |
|---|---|
| `WANDB_API_KEY` | Authentication (required) |
| `WEAVE_IMPLICITLY_PATCH_INTEGRATIONS` | Set to `false` to disable auto-patching |
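The flag's exact semantics belong to Weave; the snippet below only sketches, with stdlib code, the behavior the table describes — auto-patching stays on unless the variable is set to `false`. The helper name `auto_patching_enabled` is made up for illustration:

```python
import os

# Sketch of how an app might interpret WEAVE_IMPLICITLY_PATCH_INTEGRATIONS:
# auto-patching is on by default and off only when the value is "false".
def auto_patching_enabled(env=os.environ) -> bool:
    return env.get("WEAVE_IMPLICITLY_PATCH_INTEGRATIONS", "true").lower() != "false"

print(auto_patching_enabled({}))  # default: auto-patching on
print(auto_patching_enabled({"WEAVE_IMPLICITLY_PATCH_INTEGRATIONS": "false"}))
```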
## Common Patterns
### Wrap Existing Client

```typescript
import { wrapOpenAI } from 'weave';
import OpenAI from 'openai';

const client = wrapOpenAI(new OpenAI());
```
### Trace Class Methods

```typescript
class MyAgent {
  @weave.op
  async predict(prompt: string) {
    return "response";
  }
}
```
### Add Display Names

```typescript
const myOp = weave.op(myFunction, {
  callDisplayName: (input) => `Custom Name: ${input}`
});
```
## Clawdbot-Specific Integration

For Clawdbot and similar Node.js agents:

- Locate the LLM client initialization (usually the Anthropic or OpenAI SDK)
- Add `weave.init()` in the main entry point
- For ESM, add `--import=weave/instrument` to the node invocation
- All provider calls will then be traced to W&B
## Troubleshooting

- **No traces appearing**: check that `WANDB_API_KEY` is set
- **ESM not patching**: use the `--import=weave/instrument` flag
- **Bundler issues**: mark LLM libraries as external in your bundler config
- **Manual fallback**: use `wrapOpenAI()` or explicit `weave.op()` wrappers
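For the first troubleshooting item, a startup preflight that fails fast when the key is missing can save a silent no-trace run. The sketch below is stdlib-only and `require_api_key` is a made-up helper name, not part of Weave's API:

```python
import os

# Hypothetical preflight helper: verify WANDB_API_KEY is set before
# calling weave.init(), so a missing key fails loudly at startup.
def require_api_key(env=os.environ) -> str:
    key = env.get("WANDB_API_KEY", "").strip()
    if not key:
        raise RuntimeError(
            "WANDB_API_KEY is not set; get a key from wandb.ai/settings "
            "and export it before calling weave.init()."
        )
    return key
```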
---

Repository: altryne/weavify-skill · First seen: Jan 26, 2026