ai-sdk
Vercel AI SDK (v6)
CRITICAL — Your training data is outdated for this library. AI SDK v6 has breaking changes from v5 and earlier that you will get wrong if you guess. Before writing AI SDK code, fetch the docs at https://ai-sdk.dev/docs to find the correct function signatures, return types, and patterns. Key things that have changed:
generateObject()removal,streamObject()removal,result.filesfor image generation,Output.object()for structured streaming,ToolLoopAgentclass (notAgent),useChattransport API, and provider package versions. Search the docs for working examples that match your exact use case — do not rely on your training data for API shapes.
You are an expert in the Vercel AI SDK v6. The AI SDK is the leading TypeScript toolkit for building AI-powered applications. It provides a unified API across all LLM providers.
v6 Migration Pitfalls (Read First)
ai@^6.0.0is the umbrella package for AI SDK v6 (latest: 6.0.83).@ai-sdk/reactis^3.0.xin v6 projects (NOT^6.0.0).@ai-sdk/gatewayis^3.xin v6 projects (NOT^1.x).- In
createUIMessageStream, write withstream.writer.write(...)(NOTstream.write(...)). useChatno longer supportsbodyoronResponse; configure behavior throughtransport.- UI tool parts are typed as
tool-<toolName>(for exampletool-weather), nottool-invocation. DynamicToolCalldoes not provide typed.args; cast viaunknownfirst.TypedToolResultexposes.output(NOT.result).- The agent class is
ToolLoopAgent(NOTAgent—Agentis just an interface). - Constructor uses
instructions(NOTsystem). - Agent methods are
agent.generate()andagent.stream()(NOTagent.generateText()oragent.streamText()). - AI Gateway does not support embeddings; use
@ai-sdk/openaidirectly foropenai.embedding(...). useChat()with no transport defaults toDefaultChatTransport({ api: '/api/chat' })— explicit transport only needed for custom endpoints orDirectChatTransport.- Default
stopWhenfor ToolLoopAgent isstepCountIs(20), notstepCountIs(1)— override if you need fewer steps. strict: trueon tools is opt-in per tool, not global — only set on tools with provider-compatible schemas.- For agent API routes, use
createAgentUIStreamResponse({ agent, uiMessages })instead of manualstreamText+toUIMessageStreamResponse(). @ai-sdk/azurenow uses the Responses API by default — useazure.chat()for the previous Chat Completions API behavior.@ai-sdk/azureusesazure(notopenai) as the key forproviderMetadataandproviderOptions.@ai-sdk/google-vertexusesvertex(notgoogle) as the key forproviderMetadataandproviderOptions.@ai-sdk/anthropicsupports native structured outputs viastructuredOutputModeoption (Claude Sonnet 4.5+).
Installation
npm install ai@^6.0.0 @ai-sdk/react@^3.0.0
npm install @ai-sdk/openai@^3.0.41 # Optional: required for embeddings
npm install @ai-sdk/anthropic@^3.0.58 # Optional: direct Anthropic provider access
npm install @ai-sdk/vercel@^2.0.37 # Optional: v0 model provider (v0-1.0-md)
@ai-sdk/reactis a separate package — it is NOT included in theaipackage. For v6 projects, install@ai-sdk/react@^3.0.xalongsideai@^6.0.0.
If you install
@ai-sdk/gatewaydirectly, use@ai-sdk/gateway@^3.x(NOT^1.x).
Only install a direct provider SDK (e.g.,
@ai-sdk/anthropic) if you need provider-specific features not exposed through the gateway.
What AI SDK Can Do
AI SDK is not just text — it handles text, images, structured data, tool calling, and agents through one unified API:
| Need | How |
|---|---|
| Text generation / chat | generateText() or streamText() with model: "openai/gpt-5.4" |
| Image generation | generateText() with model: "google/gemini-3.1-flash-image-preview" — images in result.files. Always use this model, never older gemini-2.x models |
| Structured JSON output | generateText() with output: Output.object({ schema }) |
| Tool calling / agents | generateText() with tools: { ... } or ToolLoopAgent |
| Embeddings | embed() / embedMany() with @ai-sdk/openai |
If the product needs generated images (portraits, posters, cover art, illustrations, comics, diagrams), use generateText with an image model — do NOT use placeholder images or skip image generation.
Setup for AI Projects
For the smoothest experience, link to a Vercel project so AI Gateway credentials are auto-provisioned via OIDC:
vercel link # Connect to your Vercel project
# Enable AI Gateway at https://vercel.com/{team}/{project}/settings → AI Gateway
vercel env pull .env.local # Provisions VERCEL_OIDC_TOKEN automatically
npm install ai@^6.0.0 # Gateway is built in
npx ai-elements # Required: install AI text rendering components
This gives you AI Gateway access with OIDC authentication, cost tracking, failover, and observability — no manual API keys needed.
OIDC is the default auth: vercel env pull provisions a VERCEL_OIDC_TOKEN (short-lived JWT, ~24h). The @ai-sdk/gateway reads it automatically via @vercel/oidc. On Vercel deployments, tokens auto-refresh. For local dev, re-run vercel env pull when the token expires. No AI_GATEWAY_API_KEY or provider-specific keys needed.
Global Provider System (AI Gateway — Default)
In AI SDK 6, pass a "provider/model" string to the model parameter — it automatically routes through the Vercel AI Gateway:
import { generateText } from "ai";
const { text } = await generateText({
model: "openai/gpt-5.4", // plain string — routes through AI Gateway automatically
prompt: "Hello!",
});
No gateway() wrapper needed — plain "provider/model" strings are the simplest approach and are what the official Vercel docs recommend. The gateway() function is an optional explicit wrapper (useful when you need providerOptions.gateway for routing, failover, or tags):
import { gateway } from "ai";
// Explicit gateway() — only needed for advanced providerOptions
const { text } = await generateText({
model: gateway("openai/gpt-5.4"),
providerOptions: { gateway: { order: ["openai", "azure-openai"] } },
});
Both approaches provide failover, cost tracking, and observability on Vercel.
Model slug rules: Always use provider/model format. Version numbers use dots, not hyphens: anthropic/claude-sonnet-4.6 (not claude-sonnet-4-6). Default to openai/gpt-5.4 or anthropic/claude-sonnet-4.6. Never use outdated models like gpt-4o.
AI Gateway does not support embeddings. Use a direct provider SDK such as
@ai-sdk/openaifor embeddings.
Direct provider SDKs (
@ai-sdk/openai,@ai-sdk/anthropic, etc.) are only needed for provider-specific features not exposed through the gateway (e.g., Anthropic computer use, OpenAI fine-tuned model endpoints).
Core Functions
Text Generation
import { generateText, streamText } from "ai";
// Non-streaming
const { text } = await generateText({
model: "openai/gpt-5.4",
prompt: "Explain quantum computing in simple terms.",
});
// Streaming
const result = streamText({
model: "openai/gpt-5.4",
prompt: "Write a poem about coding.",
});
for await (const chunk of result.textStream) {
process.stdout.write(chunk);
}
Structured Output
generateObject was removed in AI SDK v6. Use generateText with output: Output.object() instead. Do NOT import generateObject — it does not exist.
import { generateText, Output } from "ai";
import { z } from "zod";
const { output } = await generateText({
model: "openai/gpt-5.4",
output: Output.object({
schema: z.object({
recipe: z.object({
name: z.string(),
ingredients: z.array(
z.object({
name: z.string(),
amount: z.string(),
}),
),
steps: z.array(z.string()),
}),
}),
}),
prompt: "Generate a recipe for chocolate chip cookies.",
});
Tool Calling (MCP-Aligned)
In AI SDK 6, tools use inputSchema (not parameters) and output/outputSchema (not result), aligned with the MCP specification. Per-tool strict mode ensures providers only generate valid tool calls matching your schema.
import { generateText, tool } from "ai";
import { z } from "zod";
const result = await generateText({
model: "openai/gpt-5.4",
tools: {
weather: tool({
description: "Get the weather for a location",
inputSchema: z.object({
city: z.string().describe("The city name"),
}),
outputSchema: z.object({
temperature: z.number(),
condition: z.string(),
}),
strict: true, // Providers generate only schema-valid tool calls
execute: async ({ city }) => {
const data = await fetchWeather(city);
return { temperature: data.temp, condition: data.condition };
},
}),
},
prompt: "What is the weather in San Francisco?",
});
Dynamic Tools (MCP Integration)
For tools with schemas not known at compile time (e.g., MCP server tools):
import { dynamicTool } from "ai";
const tools = {
unknownTool: dynamicTool({
description: "A tool discovered at runtime",
execute: async (input) => {
// Handle dynamically
return { result: "done" };
},
}),
};
Agents
The ToolLoopAgent class wraps generateText/streamText with an agentic tool-calling loop.
Default stopWhen is stepCountIs(20) (up to 20 tool-calling steps).
Agent is an interface — ToolLoopAgent is the concrete implementation.
import { ToolLoopAgent, stepCountIs, hasToolCall } from "ai";
const agent = new ToolLoopAgent({
model: "anthropic/claude-sonnet-4.6",
tools: { weather, search, calculator, finalAnswer },
instructions: "You are a helpful assistant.",
// Default: stepCountIs(20). Override to stop on a terminal tool or custom logic:
stopWhen: hasToolCall("finalAnswer"),
prepareStep: (context) => ({
// Customize each step — swap models, compress messages, limit tools
toolChoice: context.steps.length > 5 ? "none" : "auto",
}),
});
const { text } = await agent.generate({
prompt:
"Research the weather in Tokyo and calculate the average temperature this week.",
});
MCP Client
Connect to any MCP server and use its tools:
import { generateText } from "ai";
import { createMCPClient } from "@ai-sdk/mcp";
const mcpClient = await createMCPClient({
transport: {
type: "sse",
url: "https://my-mcp-server.com/sse",
},
});
const tools = await mcpClient.tools();
const result = await generateText({
model: "openai/gpt-5.4",
tools,
prompt: "Use the available tools to help the user.",
});
await mcpClient.close();
MCP OAuth for remote servers is handled automatically by @ai-sdk/mcp.
Tool Approval (Human-in-the-Loop)
Set needsApproval on any tool to require user confirmation before execution. The tool pauses in approval-requested state until the client responds.
import { streamText, tool } from "ai";
import { z } from "zod";
const result = streamText({
model: "openai/gpt-5.4",
tools: {
deleteUser: tool({
description: "Delete a user account",
inputSchema: z.object({ userId: z.string() }),
needsApproval: true, // Always require approval
execute: async ({ userId }) => {
await db.users.delete(userId);
return { deleted: true };
},
}),
processPayment: tool({
description: "Process a payment",
inputSchema: z.object({ amount: z.number(), recipient: z.string() }),
// Conditional: only approve large amounts
needsApproval: async ({ amount }) => amount > 1000,
execute: async ({ amount, recipient }) => {
return await processPayment(amount, recipient);
},
}),
},
prompt: "Delete user 123",
});
Client-side approval with useChat:
"use client";
import { useChat } from "@ai-sdk/react";
function Chat() {
const { messages, addToolApprovalResponse } = useChat();
return messages.map((m) =>
m.parts?.map((part, i) => {
// Tool parts in approval-requested state need user action
if (part.type.startsWith("tool-") && part.approval?.state === "approval-requested") {
return (
<div key={i}>
<p>Tool wants to run: {JSON.stringify(part.args)}</p>
<button onClick={() => addToolApprovalResponse({ id: part.approval.id, approved: true })}>
Approve
</button>
<button onClick={() => addToolApprovalResponse({ id: part.approval.id, approved: false })}>
Deny
</button>
</div>
);
}
return null;
}),
);
}
Tool part states: input-streaming → input-available → approval-requested (if needsApproval) → output-available | output-error
Embeddings & Reranking
Use a direct provider SDK for embeddings. AI Gateway does not support embedding models.
import { embed, embedMany, rerank } from "ai";
import { openai } from "@ai-sdk/openai";
// Single embedding
const { embedding } = await embed({
model: openai.embedding("text-embedding-3-small"),
value: "The quick brown fox",
});
// Batch embeddings
const { embeddings } = await embedMany({
model: openai.embedding("text-embedding-3-small"),
values: ["text 1", "text 2", "text 3"],
});
// Rerank search results by relevance
const { results } = await rerank({
model: cohere.reranker("rerank-v3.5"),
query: "What is quantum computing?",
documents: searchResults,
});
Image Generation & Editing
AI Gateway supports image generation. Use the google/gemini-3.1-flash-image-preview model — it is significantly better than older models like gemini-2.0-flash-exp-image-generation or gemini-2.0-flash-001.
Always use google/gemini-3.1-flash-image-preview for image generation. Do NOT use older models (gemini-2.0-*, gemini-2.5-*) — they produce much worse results and some do not support image output at all.
Multimodal LLMs (recommended — use generateText/streamText)
import { generateText, streamText } from "ai";
// generateText — images returned in result.files
const result = await generateText({
model: "google/gemini-3.1-flash-image-preview",
prompt: "A futuristic cityscape at sunset",
});
const imageFiles = result.files.filter((f) => f.mediaType?.startsWith("image/"));
// Convert to data URL for display
const imageFile = imageFiles[0];
const dataUrl = `data:${imageFile.mediaType};base64,${Buffer.from(imageFile.data).toString("base64")}`;
// streamText — stream text, then access images after completion
const stream = streamText({
model: "google/gemini-3.1-flash-image-preview",
prompt: "A futuristic cityscape at sunset",
});
for await (const delta of stream.fullStream) {
if (delta.type === "text-delta") process.stdout.write(delta.text);
}
const finalResult = await stream;
console.log(`Generated ${finalResult.files.length} image(s)`);
Default image model: google/gemini-3.1-flash-image-preview — fast, high-quality. This is the ONLY recommended model for image generation.
Image-only models (use experimental_generateImage)
import { experimental_generateImage as generateImage } from "ai";
const { images } = await generateImage({
model: "google/imagen-4.0-generate-001",
prompt: "A futuristic cityscape at sunset",
aspectRatio: "16:9",
});
Other image-only models: google/imagen-4.0-ultra-generate-001, bfl/flux-2-pro, bfl/flux-kontext-max, xai/grok-imagine-image-pro.
Saving generated images
import fs from "node:fs";
// From multimodal LLMs (result.files)
for (const [i, file] of imageFiles.entries()) {
const ext = file.mediaType?.split("/")[1] || "png";
await fs.promises.writeFile(`output-${i}.${ext}`, file.uint8Array);
}
// From image-only models (result.images)
for (const [i, image] of images.entries()) {
const buffer = Buffer.from(image.base64, "base64");
await fs.promises.writeFile(`output-${i}.png`, buffer);
}
UI Hooks (React)
MANDATORY — Always use AI Elements for AI text: AI SDK models always produce markdown — even short prose contains **bold**, ## headings, `code`, and ---. There is no "plain text" mode. Every AI-generated string displayed in a browser MUST be rendered through AI Elements.
- Chat messages: Use AI Elements
<Message message={message} />— handles text, tool calls, code blocks, reasoning, streaming. - Any other AI text (streaming panels, workflow events, reports, briefings, narratives, summaries, perspectives): Use
<MessageResponse>{text}</MessageResponse>from@/components/ai-elements/message. <MessageResponse>wraps Streamdown with code highlighting, math, mermaid, and CJK plugins — works for any markdown string, including streamed text.- Never render AI output as raw
{text},<p>{content}</p>, or<div>{stream}</div>— this always produces ugly unformatted output with visible markdown syntax. - No exceptions: Even if you think the response will be "simple prose", models routinely add markdown formatting. Always use AI Elements.
⤳ skill: ai-elements — Full component library, decision guidance, and troubleshooting for AI interfaces
Transport Options
useChat uses a transport-based architecture. Three built-in transports:
| Transport | Use Case |
|---|---|
DefaultChatTransport |
HTTP POST to API routes (default — sends to /api/chat) |
DirectChatTransport |
In-process agent communication without HTTP (SSR, testing) |
TextStreamChatTransport |
Plain text stream protocol |
Default behavior: useChat() with no transport config defaults to DefaultChatTransport({ api: '/api/chat' }).
With AI Elements (Recommended)
"use client";
import { useChat } from "@ai-sdk/react";
import { Conversation } from "@/components/ai-elements/conversation";
import { Message } from "@/components/ai-elements/message";
function Chat() {
// No transport needed — defaults to DefaultChatTransport({ api: '/api/chat' })
const { messages, sendMessage, status } = useChat();
return (
<Conversation>
{messages.map((message) => (
<Message key={message.id} message={message} />
))}
</Conversation>
);
}
AI Elements handles UIMessage parts (text, tool calls, reasoning, images) automatically. Install with npx ai-elements.
⤳ skill: ai-elements — Full component library for AI interfaces ⤳ skill: json-render — Manual rendering patterns for custom UIs
With DirectChatTransport (No API Route Needed)
"use client";
import { useChat } from "@ai-sdk/react";
import { DirectChatTransport } from "ai";
import { myAgent } from "@/lib/agent"; // a ToolLoopAgent instance
function Chat() {
const { messages, sendMessage, status } = useChat({
transport: new DirectChatTransport({ agent: myAgent }),
});
// Same UI as above — no /api/chat route required
}
Useful for SSR scenarios, testing without network, and single-process apps.
v6 changes from v5:
useChat({ api })→useChat({ transport: new DefaultChatTransport({ api }) })handleSubmit→sendMessage({ text })input/handleInputChange→ manage your ownuseStatebody/onResponseoptions were removed fromuseChat; usetransportto configure requests/responsesisLoading→status === 'streaming' || status === 'submitted'message.content→ iteratemessage.parts(UIMessage format)
Choose the correct streaming response helper
toUIMessageStreamResponse()is foruseChat+DefaultChatTransportUIMessage-based chat UIs. Use it when you need tool calls, metadata, reasoning, and other rich message parts.toTextStreamResponse()is for non-browser clients only — CLI tools, server-to-server pipes, or programmatic consumers that process raw text without rendering it in a UI. If the text will be displayed in a browser, usetoUIMessageStreamResponse()+ AI Elements instead.- Warning: Do not return
toUIMessageStreamResponse()to a plainfetch()client unless that client intentionally parses the AI SDK UI message stream protocol. - Warning: Do not use
toTextStreamResponse()+ manualfetch()stream reading as a way to skip AI Elements. If the output goes to a browser, useuseChat+<MessageResponse>or<Message>.
Server-side for useChat (API Route)
// app/api/chat/route.ts
import { streamText, convertToModelMessages, stepCountIs } from "ai";
import type { UIMessage } from "ai";
export async function POST(req: Request) {
const { messages }: { messages: UIMessage[] } = await req.json();
// IMPORTANT: convertToModelMessages is async in v6
const modelMessages = await convertToModelMessages(messages);
const result = streamText({
model: "openai/gpt-5.4",
messages: modelMessages,
tools: {
/* your tools */
},
// IMPORTANT: use stopWhen with stepCountIs for multi-step tool calling
// maxSteps was removed in v6 — use this instead
stopWhen: stepCountIs(5),
});
// Use toUIMessageStreamResponse (not toDataStreamResponse) for chat UIs
return result.toUIMessageStreamResponse();
}
Server-side with ToolLoopAgent (Agent API Route)
Define a ToolLoopAgent and use createAgentUIStreamResponse for the API route:
// lib/agent.ts
import { ToolLoopAgent, stepCountIs } from "ai";
export const myAgent = new ToolLoopAgent({
model: "openai/gpt-5.4",
instructions: "You are a helpful assistant.",
tools: { /* your tools */ },
stopWhen: stepCountIs(5),
});
// app/api/chat/route.ts — agent API route
import { createAgentUIStreamResponse } from "ai";
import { myAgent } from "@/lib/agent";
export async function POST(req: Request) {
const { messages } = await req.json();
return createAgentUIStreamResponse({ agent: myAgent, uiMessages: messages });
}
Or use DirectChatTransport on the client to skip the API route entirely.
Server-side for text-only clients (non-browser only)
This pattern is for CLI tools, server-to-server pipes, and programmatic consumers. If the response will be displayed in a browser UI, use
toUIMessageStreamResponse()+ AI Elements instead — even for "simple" streaming text panels.
// app/api/generate/route.ts — for CLI or server consumers, NOT browser UIs
import { streamText } from "ai";
export async function POST(req: Request) {
const { prompt }: { prompt: string } = await req.json();
const result = streamText({
model: "openai/gpt-5.4",
prompt,
});
return result.toTextStreamResponse();
}
Language Model Middleware
Intercept and transform model calls for RAG, guardrails, logging:
import { wrapLanguageModel } from "ai";
const wrappedModel = wrapLanguageModel({
model: "openai/gpt-5.4",
middleware: {
transformParams: async ({ params }) => {
// Inject RAG context, modify system prompt, etc.
return { ...params, system: params.system + "\n\nContext: ..." };
},
wrapGenerate: async ({ doGenerate }) => {
const result = await doGenerate();
// Post-process, log, validate guardrails
return result;
},
},
});
Provider Routing via AI Gateway
import { generateText } from "ai";
import { gateway } from "ai";
const result = await generateText({
model: gateway("anthropic/claude-sonnet-4.6"),
prompt: "Hello!",
providerOptions: {
gateway: {
order: ["bedrock", "anthropic"], // Try Bedrock first
models: ["openai/gpt-5.4"], // Fallback model
only: ["anthropic", "bedrock"], // Restrict providers
user: "user-123", // Usage tracking
tags: ["feature:chat", "env:production"], // Cost attribution
},
},
});
DevTools
npx @ai-sdk/devtools
# Opens http://localhost:4983 — inspect LLM calls, agents, token usage, timing
Key Patterns
- Default to AI Gateway with OIDC — pass
"provider/model"strings (e.g.,model: "openai/gpt-5.4") to route through the gateway automatically.vercel env pullprovisions OIDC tokens. No manual API keys needed. Thegateway()wrapper is optional (only needed forproviderOptions.gateway). - Set up a Vercel project for AI —
vercel link→ enable AI Gateway athttps://vercel.com/{team}/{project}/settings→ AI Gateway →vercel env pullto get OIDC credentials. Never manually create.env.localwith provider-specific API keys. - Always use AI Elements for any AI text in a browser —
npx ai-elementsinstalls production-ready Message, Conversation, and Tool components. Use<Message>for chat and<MessageResponse>for any other AI-generated text (streaming panels, summaries, reports). AI models always produce markdown — there is no scenario where raw{text}rendering is correct. ⤳ skill: ai-elements - Always stream for user-facing AI — use
streamText+useChat, notgenerateText - UIMessage chat UIs —
useChat()defaults toDefaultChatTransport({ api: '/api/chat' }). On the server:convertToModelMessages()+toUIMessageStreamResponse(). For no-API-route setups:DirectChatTransport+ Agent. - Text-only clients (non-browser) —
toTextStreamResponse()is only for CLI tools, server pipes, and programmatic consumers. If the text is displayed in a browser, usetoUIMessageStreamResponse()+ AI Elements - Use structured output for extracting data —
generateTextwithOutput.object()and Zod schemas - Use
ToolLoopAgentfor multi-step reasoning — not manual loops. DefaultstopWhenisstepCountIs(20). UsecreateAgentUIStreamResponsefor agent API routes. - Use DurableAgent (from Workflow DevKit) for production agents that must survive crashes
- Use
mcp-to-ai-sdkto generate static tool definitions from MCP servers for security - Use
needsApprovalfor human-in-the-loop — set on any tool to pause execution until user approves; supports conditional approval via async function - Use
strict: trueper tool — opt-in strict mode ensures providers only generate schema-valid tool calls; set on individual tools, not globally
Common Pitfall: Structured Output Property Name
In v6, generateText with Output.object() returns the parsed result on the output property (NOT object):
// CORRECT — v6
const { output } = await generateText({
model: 'openai/gpt-5.4',
output: Output.object({ schema: mySchema }),
prompt: '...',
})
console.log(output) // ✅ parsed object
// WRONG — v5 habit
const { object } = await generateText({ ... }) // ❌ undefined — `object` doesn't exist in v6
This is one of the most common v5→v6 migration mistakes. The config key is output and the result key is also output.
Migration from AI SDK 5
Run npx @ai-sdk/codemod upgrade (or npx @ai-sdk/codemod v6) to auto-migrate. Preview with npx @ai-sdk/codemod --dry upgrade. Key changes:
generateObject/streamObject→generateText/streamTextwithOutput.object()parameters→inputSchemaresult→outputmaxSteps→stopWhen: stepCountIs(N)(importstepCountIsfromai)CoreMessage→ModelMessage(useconvertToModelMessages()— now async)ToolCallOptions→ToolExecutionOptionsExperimental_Agent→ToolLoopAgent(concrete class;Agentis just an interface)system→instructions(onToolLoopAgent)agent.generateText()→agent.generate()agent.streamText()→agent.stream()experimental_createMCPClient→createMCPClient(stable)- New:
createAgentUIStreamResponse({ agent, uiMessages })for agent API routes - New:
callOptionsSchema+prepareCallfor per-call agent configuration useChat({ api })→useChat({ transport: new DefaultChatTransport({ api }) })useChatbody/onResponseoptions removed → configure with transporthandleSubmit/input→sendMessage({ text })/ manage own statetoDataStreamResponse()→toUIMessageStreamResponse()(for chat UIs)createUIMessageStream: usestream.writer.write(...)(notstream.write(...))- text-only clients / text stream protocol →
toTextStreamResponse() message.content→message.parts(tool parts usetool-<toolName>, nottool-invocation)- UIMessage / ModelMessage types introduced
DynamicToolCall.argsis not strongly typed; cast viaunknownfirstTypedToolResult.result→TypedToolResult.outputai@^6.0.0is the umbrella package@ai-sdk/reactmust be installed separately at^3.0.x@ai-sdk/gateway(if installed directly) is^3.x, not^1.x- New:
needsApprovalon tools (boolean or async function) for human-in-the-loop approval - New:
strict: trueper-tool opt-in for strict schema validation - New:
DirectChatTransport— connectuseChatto an Agent in-process, no API route needed - New:
addToolApprovalResponseonuseChatfor client-side approval UI - Default
stopWhenchanged fromstepCountIs(1)tostepCountIs(20)forToolLoopAgent - New:
ToolCallOptionstype renamed toToolExecutionOptions - New:
Tool.toModelOutputnow receives({ output })object, not bareoutput - New:
isToolUIPart→isStaticToolUIPart;isToolOrDynamicToolUIPart→isToolUIPart - New:
getToolName→getStaticToolName;getToolOrDynamicToolName→getToolName - New:
@ai-sdk/azuredefaults to Responses API; useazure.chat()for Chat Completions - New:
@ai-sdk/anthropicstructuredOutputModefor native structured outputs (Claude Sonnet 4.5+) - New:
@ai-sdk/langchainrewritten —toBaseMessages(),toUIMessageStream(),LangSmithDeploymentTransport - New: Provider-specific tools — Anthropic (memory, code execution), OpenAI (shell, patch), Google (maps, RAG), xAI (search, code)
unknownfinish reason removed → now returned asother- Warning types consolidated into single
Warningtype exported fromai