Vercel AI SDK Best Practices Skill
- Use `streamText` for streaming text responses from AI models.
- Use `streamObject` for streaming structured JSON responses.
- Implement proper error handling with the `onFinish` callback.
- Use `onChunk` for real-time UI updates during streaming.
- Prefer server-side streaming for better performance and security.
- Use `smoothStream` for smoother streaming experiences.
- Implement proper loading states for AI responses.
- Use `useChat` for client-side chat interfaces when needed.
- Use `useCompletion` for client-side text completion interfaces.
- Handle rate limiting and quota management appropriately.
- Implement proper authentication and authorization for AI endpoints.
- Use environment variables for API keys and sensitive configuration.
- Cache AI responses when appropriate to reduce costs.
- Implement proper logging for debugging and monitoring.
Iron Laws
- ALWAYS use streaming responses with `streamText` or `streamObject` for AI outputs rather than blocking calls
- NEVER expose API keys or model provider secrets in client-side code; use server-only route handlers
- ALWAYS implement error boundaries and loading states for streaming AI responses in React components
- NEVER call AI SDK functions directly from Client Components; use Server Actions or API routes
- ALWAYS specify `maxTokens` and timeout limits to prevent runaway AI calls from exhausting budgets
Anti-Patterns
| Anti-Pattern | Why It Fails | Correct Approach |
|---|---|---|
| Blocking `generateText` in UI routes | Hangs the request, poor UX for long responses | Use `streamText` with a streaming response |
| API keys in client-side code | Secret exposure, security vulnerability | Move AI calls to Server Actions or API routes |
| No error boundary for streaming | Uncaught errors break the entire component tree | Wrap streaming components in error boundaries |
| Calling AI SDK in Client Components | Exposes provider keys, breaks SSR | Use Server Actions (`"use server"`) or route handlers |
| No token or timeout limits | Runaway calls exhaust credits and stall users | Always set `maxTokens` and a request timeout |
Memory Protocol (MANDATORY)
Before starting: `cat .claude/context/memory/learnings.md`
After completing: record any new patterns or exceptions discovered.
ASSUME INTERRUPTION: Your context may reset. If it's not in memory, it didn't happen.
Repository: oimiragieo/agent-studio
First Seen: Jan 27, 2026