tanstack-ai-vue-skilld
TanStack/ai @tanstack/ai-vue
Vue hooks for TanStack AI
Version: 0.6.6 (Mar 2026) Deps: @tanstack/ai-client@0.7.1 Tags: latest: 0.6.6 (Mar 2026)
References: Docs — API reference, guides
API Changes
This section documents version-specific API changes for @tanstack/ai-vue v0.6.6 (current v0.x series). This library is pre-1.0 — all v0.x releases are in scope.
- **BREAKING:** Monolithic adapter factories removed — `openai()`, `anthropic()`, etc. replaced by activity-specific functions: `openaiText('gpt-5.2')`, `openaiSummarize('gpt-5-mini')`, `openaiImage('dall-e-3')`, etc. The model name is now passed to the adapter factory, not to `chat()`. source
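A before/after sketch of the factory migration (the `@tanstack/ai` and `@tanstack/ai-openai` import paths are assumptions; `messages` is assumed in scope):

```ts
import { chat } from "@tanstack/ai" // assumed server entry point
import { openaiText } from "@tanstack/ai-openai" // assumed adapter package

// Before (v0.5.x): generic factory, model passed to chat()
// const stream = chat({ adapter: openai(), model: "gpt-4", messages })

// After (v0.6.x): activity-specific factory carries the model name
const stream = chat({ adapter: openaiText("gpt-5.2"), messages })
```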
- **BREAKING:** `model` parameter removed from `chat()` — the model is now embedded in the adapter argument (e.g., `adapter: openaiText('gpt-5.2')` instead of `adapter: openai(), model: 'gpt-4'`). Passing `model` at the call site is silently ignored. source
- **BREAKING:** Nested `options` object flattened — `chat({ options: { temperature, maxTokens, topP } })` must be changed to `chat({ temperature, maxTokens, topP })`. Nested options are silently discarded. source
- **BREAKING:** `providerOptions` renamed to `modelOptions` — `chat({ providerOptions: { ... } })` must be updated to `chat({ modelOptions: { ... } })`. Silently ignored if not updated. source
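Taken together with the flattened `options` object, a v0.5 → v0.6 call migrates like this sketch (the `parallelToolCalls` key is a hypothetical `modelOptions` entry; `adapter` and `messages` assumed in scope):

```ts
// Before (v0.5.x)
// chat({
//   adapter, messages,
//   options: { temperature: 0.2, maxTokens: 512 },
//   providerOptions: { parallelToolCalls: false },
// })

// After (v0.6.x): options flattened, providerOptions → modelOptions
const stream = chat({
  adapter: openaiText("gpt-5.2"),
  messages,
  temperature: 0.2,
  maxTokens: 512,
  modelOptions: { parallelToolCalls: false },
})
```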
- **BREAKING:** `toResponseStream` renamed to `toServerSentEventsStream` and now returns a `ReadableStream` instead of a `Response` — you must manually create `new Response(stream, { headers })`. The `AbortController` is now a separate parameter: `toServerSentEventsStream(stream, abortController)`. source
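A route-handler sketch of the new shape (the handler signature and header value are assumptions):

```ts
export async function POST(request: Request) {
  const { messages } = await request.json()
  const abortController = new AbortController()

  const stream = chat({ adapter: openaiText("gpt-5.2"), messages })

  // Returns a ReadableStream now, so the Response is built manually;
  // the AbortController is a separate second argument.
  const sse = toServerSentEventsStream(stream, abortController)
  return new Response(sse, { headers: { "Content-Type": "text/event-stream" } })
}
```

The `toServerSentEventsResponse(stream, init?)` helper listed under "Also changed" appears to wrap this pattern when no custom `Response` construction is needed.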
- **BREAKING:** `embedding()` function removed — embeddings support eliminated entirely. Use provider SDKs directly or vector-DB-native embedding APIs. source
- **BREAKING:** `chat({ as: 'promise' })` replaced by a separate `chatCompletion()` function — the `as` option is removed from `chat()`. `chat({ as: 'stream' })` is now just `chat()`; `chat({ as: 'response' })` is now `chat()` + `toServerSentEventsStream()`. source
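How each former `as` mode maps onto v0.6, sketched (imports assumed; the single-argument `toServerSentEventsStream` call assumes the abort controller is optional):

```ts
// as: 'stream'  → chat() now returns the stream directly
const stream = chat({ adapter: openaiText("gpt-5.2"), messages })

// as: 'promise' → dedicated chatCompletion() resolves with the full result
const completion = await chatCompletion({ adapter: openaiText("gpt-5.2"), messages })

// as: 'response' → chat() plus manual SSE wrapping
const response = new Response(
  toServerSentEventsStream(chat({ adapter: openaiText("gpt-5.2"), messages })),
  { headers: { "Content-Type": "text/event-stream" } },
)
```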
- **NEW:** `useChat` returns a `status` reactive ref — tracks the generation lifecycle as `'ready' | 'submitted' | 'streaming' | 'error'`. Previously there was no generation lifecycle state. source
- **NEW:** `sendMessage()` accepts a `MultimodalContent` object — `sendMessage({ content: [{ type: 'text', content: '...' }, { type: 'image', source: { type: 'url', value: '...' } }] })` enables image/audio/video/document content alongside text. Added in v0.5.0. source
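In a component this might look like the following sketch (URL, prompt, and connection setup are placeholders):

```ts
const { sendMessage } = useChat({
  connection: fetchServerSentEvents("/api/chat"),
})

sendMessage({
  content: [
    { type: "text", content: "What is in this picture?" },
    { type: "image", source: { type: "url", value: "https://example.com/photo.png" } },
  ],
})
```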
- **NEW:** `agentLoopStrategy` parameter replaces the bare `maxIterations: number` — use `agentLoopStrategy: maxIterations(5)`, `untilFinishReason(['stop'])`, or `combineStrategies([...])`. The old `maxIterations` number is converted automatically but deprecated. source
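A sketch combining strategies on a tool-calling request (`supportTools` is a hypothetical tool array; other names assumed in scope):

```ts
const stream = chat({
  adapter: openaiText("gpt-5.2"),
  messages,
  tools: supportTools,
  // Before: maxIterations: 10 (still auto-converted, but deprecated)
  agentLoopStrategy: combineStrategies([
    maxIterations(10),           // hard cap on tool-call rounds
    untilFinishReason(["stop"]), // exit early once the model finishes
  ]),
})
```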
- **NEW:** `toolDefinition({ name, description, inputSchema, outputSchema?, needsApproval? })` — creates isomorphic tool definitions. Call `.server(fn)` for server-side execution or `.client(fn)` for client-side execution. Replaces ad-hoc tool objects. source
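A sketch of one definition shared across both sides (Zod schemas per the best-practices note below in this doc; handler bodies are placeholders):

```ts
import { z } from "zod"

// shared/tools.ts — definition only, no execution logic
export const getWeather = toolDefinition({
  name: "getWeather",
  description: "Current weather for a city",
  inputSchema: z.object({ city: z.string() }),
  outputSchema: z.object({ tempC: z.number() }),
})

// server route: attach a server-side implementation
const weatherServer = getWeather.server(async ({ city }) => ({ tempC: 21 }))

// Vue component: attach a browser-side implementation instead
const weatherClient = getWeather.client(async ({ city }) => ({ tempC: 21 }))
```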
- **NEW:** `@tanstack/ai-client` package — the `ChatClient` class provides framework-agnostic headless chat state management with `sendMessage()`, `reload()`, `stop()`, `clear()`, `addToolResult()`, and `addToolApprovalResponse()` methods. source
- **NEW:** Connection adapter factories — `fetchServerSentEvents(url, options?)`, `fetchHttpStream(url, options?)`, and `stream(fn)` from `@tanstack/ai-client`. Pass to `useChat({ connection: fetchServerSentEvents('/api/chat') })` instead of `url: '/api/chat'`. source
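The migration in a component, sketched:

```ts
// Before (v0.5.x)
// const chatState = useChat({ url: "/api/chat" })

// After (v0.6.x): explicit connection adapter
const chatState = useChat({
  connection: fetchServerSentEvents("/api/chat"),
})
```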
- **NEW:** `extendAdapter(factory, customModels)` + `createModel(name, modalities)` — adds custom/fine-tuned model names to existing adapter factories with full type inference. Avoids `as const` casts. source
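A sketch with a hypothetical fine-tune name:

```ts
const myText = extendAdapter(openaiText, [
  createModel("ft:gpt-5.2:acme-support", ["text"]),
])

// The fine-tune is now part of the factory's allowed-model union —
// no `as const` cast, no runtime overhead.
const stream = chat({ adapter: myText("ft:gpt-5.2:acme-support"), messages })
```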
Also changed: `clientTools(...tools)` NEW (typed tool array, discriminated-union narrowing) · `createChatClientOptions(options)` NEW · `InferChatMessages<T>` NEW · `toServerSentEventsResponse(stream, init?)` NEW (returns `Response`) · `toHttpStream(stream)` NEW · `toHttpResponse(stream)` NEW · `assertMessages({ adapter }, messages)` NEW (type-level assertion) · `ThinkingStreamChunk` NEW (chunk type for model reasoning)
Best Practices
- `useChat` returns `DeepReadonly<ShallowRef<T>>` refs — never reassign `messages` directly; use `setMessages()` for manual updates. Changing `connection` or `body` options recreates the underlying `ChatClient`, requiring a component remount or a `key` prop change to take effect
- Use `status` (added v0.4.0) instead of `isLoading` for granular lifecycle control — `status.value` tracks `'ready' | 'submitted' | 'streaming' | 'error'`, enabling distinct UI states for submission vs. active streaming source
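A sketch of distinct UI states driven by `status` (`showSpinner`, `renderPartial`, and `showRetryBanner` are hypothetical component helpers):

```ts
import { watchEffect } from "vue"

const { status } = useChat({ connection: fetchServerSentEvents("/api/chat") })

watchEffect(() => {
  switch (status.value) {
    case "submitted": showSpinner(); break      // request sent, no tokens yet
    case "streaming": renderPartial(); break    // tokens arriving
    case "error":     showRetryBanner(); break
    case "ready":     /* idle */ break
  }
})
```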
- Pass client tool arrays through `clientTools()` instead of `as const` — eliminates the need for a const assertion while enabling full discriminated-union narrowing on `part.name`, `part.input`, and `part.output` in message iteration source
- Wrap `useChat` options with `createChatClientOptions()` and derive message types using `InferChatMessages<typeof chatOptions>` — this propagates tool types through the entire message type, making `part.name` a literal union and `part.input`/`part.output` typed from Zod schemas source
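Sketched together with `clientTools` (`getWeather`/`getTime` and their handlers are hypothetical tool definitions):

```ts
const chatOptions = createChatClientOptions({
  connection: fetchServerSentEvents("/api/chat"),
  tools: clientTools(getWeather.client(fetchWeather), getTime.client(fetchTime)),
})

type Messages = InferChatMessages<typeof chatOptions>
// part.name narrows to 'getWeather' | 'getTime';
// part.input / part.output come from the tools' Zod schemas.
```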
- Define tools with `toolDefinition()` in a shared file, then call `.server()` in route handlers and `.client()` in Vue components — passing the bare definition to `chat()` signals that the client will execute it, while passing the `.server()` output executes it server-side automatically source
- Use Zod schemas (v4.2+) over raw JSON Schema for `inputSchema`/`outputSchema` in `toolDefinition()` and `chat({ outputSchema })` — JSON Schema infers `any` for tool inputs/outputs and `unknown` for structured-output return types, losing all downstream type safety source
- Set `agentLoopStrategy: maxIterations(n)` explicitly when tools are present — the default is 5 iterations, which is too low for multi-step agentic workflows; use `untilFinishReason(['stop'])` to exit as soon as the model finishes without hitting the limit source
- Subscribe to `aiEventClient` with `{ withEventTarget: true }` in production code — without this third argument the client only emits to the devtools event bus (absent in production builds); the flag also dispatches to the current `EventTarget` for application-level observability source
- Prefer `fetchServerSentEvents` over `fetchHttpStream` for client connections — SSE provides automatic reconnection; pass URL and options as functions (not static values) when headers like `Authorization` must be re-evaluated on every request source
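A sketch of function-valued URL and options (`getAccessToken` is hypothetical; the exact options shape is an assumption):

```ts
const connection = fetchServerSentEvents(
  () => "/api/chat", // re-evaluated on every request
  {
    // a function is re-invoked per request, so a refreshed token is always
    // sent; a static object would be captured once at setup time
    headers: () => ({ Authorization: `Bearer ${getAccessToken()}` }),
  },
)
```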
- Use `extendAdapter(baseFactory, [createModel('model-name', ['text', 'image'])])` to add TypeScript types for fine-tuned models or OpenAI-compatible proxies — this adds the model to the adapter's allowed type union with zero runtime overhead while preserving all original factory config parameters source