# Backend AI Agent Creation (`backend-ai-agent`)

Create AI agents using the Vercel AI SDK with proper patterns for tool use, tracing, and error handling.
## Overview

The AI SDK supports:

- **One-off calls:** `generateText` and `generateObject` for single-purpose operations
- **Agentic workflows:** multi-turn conversations with tool use
## Installation

```bash
npm install ai @ai-sdk/anthropic @ai-sdk/google @ai-sdk/openai zod
```
## Setup

### 1. Model Providers (`lib/ai/models.ts`)

```ts
import { createAnthropic } from '@ai-sdk/anthropic';
import { createGoogleGenerativeAI } from '@ai-sdk/google';
import { createOpenAI } from '@ai-sdk/openai';

import config from '../../config';

export const gemini = createGoogleGenerativeAI({ apiKey: config.gemini.apiKey });
export const claude = createAnthropic({ apiKey: config.anthropic.apiKey });
export const openai = createOpenAI({ apiKey: config.openai.apiKey });
```
### 2. Model Constants (in `@{project}/types`)

```ts
export const LATEST_PRO_CLAUDE_MODEL = 'claude-sonnet-4-20250514';
export const LATEST_PRO_OPENAI_MODEL = 'gpt-4o-2025-04-15';
export const LATEST_PRO_GEMINI_MODEL = 'gemini-2.0-flash-exp';
export const CLAUDE_HAIKU_MODEL = 'claude-haiku-4.5-20250103';
export const GEMINI_FLASH_MODEL = 'gemini-2.0-flash-exp';

export type ModelProvider = 'openai' | 'claude' | 'gemini';
```
### 3. Config

Add to `apps/backend/src/config/index.ts`:

```ts
anthropic: { apiKey: process.env.ANTHROPIC_API_KEY || '' },
openai: { apiKey: process.env.OPENAI_API_KEY || '' },
gemini: { apiKey: process.env.GEMINI_API_KEY || '' },
```
## One-Off LLM Calls

### Text Generation

```ts
import { generateText } from 'ai';

import { gemini } from '../lib/ai/models';
import { GEMINI_FLASH_MODEL } from '@{project}/types';
import { log } from '../lib/logger'; // adjust to your logger's actual path

export async function generateSummary(content: string): Promise<string | null> {
  try {
    const result = await generateText({
      model: gemini(GEMINI_FLASH_MODEL),
      prompt: `Summarize the following content:\n\n${content}`,
    });
    return result.text;
  } catch (err) {
    log.error({ err }, 'Error generating summary');
    return null;
  }
}
```
### Structured Output

```ts
import { generateObject } from 'ai';
import { z } from 'zod/v4';

import { gemini } from '../lib/ai/models';
import { GEMINI_FLASH_MODEL } from '@{project}/types';

const TimeEstimateSchema = z.object({
  hours: z.number().int().min(0).describe('Estimated hours'),
  minutes: z.number().int().min(0).max(59).describe('Additional minutes'),
  reasoning: z.string().describe('Explanation'),
});

const result = await generateObject({
  model: gemini(GEMINI_FLASH_MODEL),
  schema: TimeEstimateSchema,
  prompt: `Estimate how long this task will take: ${task}`,
});
```
## Agentic Workflows

See `references/agent-patterns.md` for complete examples, including:

- Basic agent structure with conversation loop
- Tool creation patterns
- Model failover implementation
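The failover idea can be sketched as a small provider-agnostic helper. This is a minimal illustration, not the implementation in `references/agent-patterns.md`; the `withFailover` name and shape are assumptions:

```typescript
// Try each attempt in order and return the first success; rethrow the last
// error only if every attempt fails. In practice each attempt wraps a
// generateText/generateObject call against a different provider.
export async function withFailover<T>(
  attempts: Array<() => Promise<T>>,
): Promise<T> {
  let lastError: unknown;
  for (const attempt of attempts) {
    try {
      return await attempt();
    } catch (err) {
      lastError = err; // fall through to the next provider
    }
  }
  throw lastError ?? new Error('withFailover called with no attempts');
}
```

Illustrative usage: `await withFailover([() => generateText({ model: claude(LATEST_PRO_CLAUDE_MODEL), prompt }).then((r) => r.text), () => generateText({ model: openai(LATEST_PRO_OPENAI_MODEL), prompt }).then((r) => r.text)])`.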
## Model Selection Guide

| Use Case | Recommended Model |
|---|---|
| Simple categorization | Gemini Flash, Claude Haiku |
| Short summaries | Gemini Flash |
| Complex reasoning | Claude Sonnet, GPT-4o |
| Agentic workflows | Claude Sonnet, GPT-4o |
| Maximum quality | Claude Opus |
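The table can also be encoded as a lookup so call sites stay consistent. A sketch: the `UseCase` names and `modelFor` helper are illustrative, and the Opus ID is a placeholder to be replaced with whatever constant `@{project}/types` exports:

```typescript
// Illustrative mapping from the use cases above to concrete model IDs.
type UseCase = 'categorization' | 'summary' | 'reasoning' | 'agentic' | 'maxQuality';

const MODEL_FOR_USE_CASE: Record<UseCase, string> = {
  categorization: 'gemini-2.0-flash-exp',  // Gemini Flash (or Claude Haiku)
  summary: 'gemini-2.0-flash-exp',         // Gemini Flash
  reasoning: 'claude-sonnet-4-20250514',   // Claude Sonnet (or GPT-4o)
  agentic: 'claude-sonnet-4-20250514',     // Claude Sonnet (or GPT-4o)
  maxQuality: 'claude-opus-4-20250514',    // placeholder Opus ID; use your constant
};

export function modelFor(useCase: UseCase): string {
  return MODEL_FOR_USE_CASE[useCase];
}
```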
## Best Practices

### Prompt Structure

```ts
const prompt = `
You are an expert at [specific task].

<context>
${relevantContext}
</context>

<instructions>
1. [First instruction]
2. [Second instruction]
</instructions>
`;
```
### Schema Design

Make schemas descriptive:

```ts
const schema = z.object({
  category: z.enum(['urgent', 'normal', 'low'])
    .describe('Priority level based on deadline and impact'),
  confidence: z.number().min(0).max(1)
    .describe('Confidence score from 0 to 1'),
});
```
### Error Handling

Always handle errors gracefully with safe defaults:

```ts
try {
  const result = await generateObject({ model, schema, prompt });
  return result.object;
} catch (error) {
  log.error({ error }, 'AI call failed');
  return { category: 'normal', confidence: 0, reasoning: 'Error occurred' };
}
```
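This try/catch-with-fallback pattern repeats often enough to be worth factoring into a helper. A sketch, assuming callers supply their own error handling; `withDefault` is not an SDK export:

```typescript
// Run an async AI call and return a safe default on any error, so callers
// never have to handle a rejected promise at the use site.
export async function withDefault<T>(
  call: () => Promise<T>,
  fallback: T,
  onError: (err: unknown) => void = () => {},
): Promise<T> {
  try {
    return await call();
  } catch (err) {
    onError(err); // e.g. (err) => log.error({ err }, 'AI call failed')
    return fallback;
  }
}
```

Illustrative usage: `await withDefault(() => generateObject({ model, schema, prompt }).then((r) => r.object), { category: 'normal', confidence: 0, reasoning: 'Error occurred' })`.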
### Token Limits

- One-off calls: `maxOutputTokens: 2048`
- Agentic workflows: `maxOutputTokens: 8192`
- Complex structured output: `maxOutputTokens: 16384`
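These budgets can live next to the model constants so they are not scattered across call sites. A sketch; the `MAX_OUTPUT_TOKENS` name and keys are illustrative:

```typescript
// Output-token budgets from the guidelines above, keyed by call type.
export const MAX_OUTPUT_TOKENS = {
  oneOff: 2048,             // single generateText/generateObject calls
  agentic: 8192,            // multi-turn agent loops
  complexStructured: 16384, // large structured outputs
} as const;
```

Illustrative usage: `generateText({ model, prompt, maxOutputTokens: MAX_OUTPUT_TOKENS.oneOff })`.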
## File Structure

```
apps/backend/src/lib/ai/
├── models.ts        # Model provider instances
├── failover.ts      # Failover utilities
├── agents/
│   └── MyAgent.ts   # Agent implementations
└── tools/
    └── myTools.ts   # Tool definitions
```
## Checklist

- Install dependencies: `npm install ai @ai-sdk/anthropic @ai-sdk/google @ai-sdk/openai`
- Create `lib/ai/models.ts` with provider instances
- Add model constants to `@{project}/types`
- Add API keys to config
- Create agent class with conversation loop
- Create tools with Zod parameters
- Implement failover for critical operations
- Test with realistic inputs