# AI SDK Core

Use AI SDK Core to generate text/structured output, call tools, and connect to MCP servers with consistent APIs across providers.
## Quick Start

```bash
pnpm add ai @ai-sdk/openai zod@^4.3.5
```

```ts
import { generateText } from 'ai';

const { text } = await generateText({
  model: 'openai/gpt-4o',
  prompt: 'Explain quantum computing in one paragraph.',
});
```
## Function Selection

| Need | Function | Streaming |
|---|---|---|
| Text response | `generateText` | No |
| Streaming text | `streamText` | Yes |
| Structured JSON | `generateObject` | No |
| Streaming JSON | `streamObject` | Yes |
| Embeddings | `embed` / `embedMany` | No |
| Rerank | `rerank` | No |
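Once `embed`/`embedMany` return vectors, ranking by cosine similarity is the usual next step. Below is an illustrative re-implementation in plain TypeScript so the math is visible; the SDK itself also exports a `cosineSimilarity` helper you would normally use instead.

```ts
// Cosine similarity between two embedding vectors, as used when ranking
// results from embed/embedMany. Illustrative sketch; prefer the SDK's
// exported cosineSimilarity helper in real code.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error('vector length mismatch');
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

Scores close to 1 mean the embeddings point in the same direction; see references/embeddings-rag.md for full RAG patterns.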
## Core Patterns

### Generate Text

```ts
import { generateText } from 'ai';

const { text, usage } = await generateText({
  model: 'anthropic/claude-sonnet-4.5',
  system: 'You are a helpful assistant.',
  prompt: 'What is the capital of France?',
});
```
### Stream Text

```ts
import { streamText } from 'ai';

const result = streamText({
  model: 'openai/gpt-4o',
  prompt: 'Write a short story.',
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```
### Generate Structured Data

```ts
import { generateObject } from 'ai';
import { z } from 'zod';

const { object } = await generateObject({
  model: 'openai/gpt-4o',
  schema: z.object({
    recipe: z.object({
      name: z.string(),
      ingredients: z.array(z.object({ name: z.string(), amount: z.string() })),
      steps: z.array(z.string()),
    }),
  }),
  prompt: 'Generate a recipe for chocolate chip cookies.',
});
```
### Tool Calling (Typed)

```ts
import { generateText, tool } from 'ai';
import { z } from 'zod';

const { text, toolCalls } = await generateText({
  model: 'openai/gpt-4o',
  tools: {
    weather: tool({
      description: 'Get weather for a location',
      inputSchema: z.object({ location: z.string() }),
      execute: async ({ location }) => ({ temperature: 72, condition: 'sunny' }),
    }),
  },
  prompt: 'What is the weather in San Francisco?',
});
```
### Dynamic Tools (Runtime Schemas)

```ts
import { dynamicTool } from 'ai';
import { z } from 'zod';

const customTool = dynamicTool({
  description: 'Execute a custom function',
  inputSchema: z.object({}),
  execute: async input => ({ ok: true, input }),
});
```
### Multi-Step Tool Execution

```ts
import { generateText, stepCountIs } from 'ai';

// search, analyze, and summarize are tools defined elsewhere with tool()
const { steps } = await generateText({
  model: 'openai/gpt-4o',
  tools: { search, analyze, summarize },
  stopWhen: stepCountIs(5),
  prompt: 'Research and summarize AI developments.',
});
```
## Tooling Checklist

- Use `tool()` for typed inputs and `dynamicTool()` for unknown schemas.
- Use `needsApproval` for sensitive actions (tool-approval-request/response flow).
- Use `stopWhen` with `stepCountIs`/`hasToolCall` for multi-step loops.
- Use `prepareStep` for per-step controls (model swap, `toolChoice`, `activeTools`, prompt compression).
- Use `experimental_context` when tools need app-specific context.
- Use `inputExamples` and `strict` to improve tool-call reliability.
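The `stopWhen` conditions above can be thought of as predicates over the steps executed so far. The sketch below models that idea in plain TypeScript; the `Step` shape and these implementations are simplified illustrations, not the SDK's actual internal types.

```ts
// Conceptual model of stopWhen conditions such as stepCountIs/hasToolCall.
// Each condition inspects the steps run so far and says whether to stop.
type Step = { toolCalls: { toolName: string }[] };
type StopCondition = (steps: Step[]) => boolean;

// Stop once n steps have been executed.
const stepCountIs = (n: number): StopCondition => steps => steps.length >= n;

// Stop once any step has called the named tool.
const hasToolCall = (toolName: string): StopCondition => steps =>
  steps.some(step => step.toolCalls.some(call => call.toolName === toolName));
```

Framing conditions as composable predicates is what lets you combine them (e.g. an array of conditions where any match stops the loop).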
## MCP Integration (Model Context Protocol)

- Use `createMCPClient()` to load MCP tools, resources, and prompts.
- Prefer HTTP transport for production; use `Experimental_StdioMCPTransport` only for local Node.js servers.
- Close MCP clients after use (`try`/`finally` or `onFinish`).

See references/mcp-integration.md for transports, schema definition, `outputSchema` typing, and elicitation.
## Reference Files

| Reference | When to Use |
|---|---|
| references/text-generation.md | `generateText`/`streamText` callbacks, streaming, response handling |
| references/structured-data.md | `generateObject`/`streamObject`, Output API, Zod patterns |
| references/tool-calling.md | `tool`/`dynamicTool`, approval flow, repair, `activeTools`, hooks |
| references/dynamic-tools.md | `dynamicTool` patterns, MCP + dynamic tools, large tool sets |
| references/embeddings-rag.md | `embed`/`embedMany`, `rerank`, chunking |
| references/providers.md | OpenAI/Anthropic/Google setup, registry, AI Gateway |
| references/middleware.md | `wrapLanguageModel`, built-in/custom middleware |
| references/mcp-integration.md | MCP client, transports, tools/resources/prompts/elicitation |
| references/production.md | Telemetry, error handling, testing, cost control |
| references/migration.md | v6 upgrade notes |
## Error Handling

```ts
import { generateText, APICallError } from 'ai';

try {
  await generateText({ model: 'openai/gpt-4o', prompt: 'Hello' });
} catch (error) {
  // The exported class is APICallError; its name property is 'AI_APICallError'.
  if (error instanceof APICallError) {
    console.error('API Error:', error.message);
  }
}
```
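Transient API errors (`APICallError` exposes an `isRetryable` flag) are often handled with exponential backoff. Here is a generic retry wrapper in plain TypeScript as an illustrative sketch; it is not an AI SDK API, and the retry/backoff parameters are arbitrary defaults.

```ts
// Retry an async operation with exponential backoff.
// Illustrative sketch: retries every failure; in practice you would check
// something like APICallError.isRetryable before retrying.
async function withRetry<T>(
  fn: () => Promise<T>,
  { retries = 3, baseMs = 250 } = {},
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (attempt === retries) break;
      // Wait 1x, 2x, 4x, ... baseMs between attempts.
      await new Promise(resolve => setTimeout(resolve, baseMs * 2 ** attempt));
    }
  }
  throw lastError;
}
```

Usage would wrap any SDK call, e.g. `await withRetry(() => generateText({ model, prompt }))`.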
## Provider Setup

```ts
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const { text } = await generateText({
  model: openai('gpt-4o'),
  prompt: 'Hello!',
});
```
## Version Guidance

- Use AI SDK v6+ with matching provider packages.
- Pin major versions in `package.json` to avoid breaking changes.
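Pinning majors can be sketched in `package.json` like this. The `ai` range follows the v6 guidance above and the `zod` range matches the Quick Start; the `@ai-sdk/openai` major shown is a placeholder, so match it to whatever major is current for your SDK version.

```json
{
  "dependencies": {
    "ai": "^6.0.0",
    "@ai-sdk/openai": "^3.0.0",
    "zod": "^4.3.5"
  }
}
```

Caret ranges allow minor and patch updates while blocking the next major, which is where breaking changes land.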