# Integrating AI (Vercel AI SDK & Ollama)

## When to use this skill
- When the user mentions "AI", "LLM", "Chatbot", "GPT", or "Ollama".
- When building features like "Explain this", "Generate text", or "Chat with PDF".
- When implementing streaming text responses.
## Workflow

- Installation: `npm install ai @ai-sdk/openai @ai-sdk/anthropic ollama-ai-provider`
- Backend Route (Edge/Serverless): create an API route using `streamText` from `ai`.
- Frontend Hook: use `useChat` or `useCompletion` from `ai/react` to handle UI state automatically.
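The hook and the route talk over plain JSON. A minimal sketch of the request body `useChat` POSTs to the route, assuming the SDK's default wire format:

```typescript
// Shape of the JSON body the chat route receives (assumption: default useChat wire format).
type ChatMessage = { role: 'user' | 'assistant' | 'system'; content: string };

const body: { messages: ChatMessage[] } = {
  messages: [{ role: 'user', content: 'Explain this function' }],
};

// useChat serializes the running message list and POSTs it on every submit.
const payload = JSON.stringify(body);
console.log(payload);
```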
## Instructions

### 1. API Route (Next.js App Router)

`app/api/chat/route.ts`:
```ts
import { openai } from '@ai-sdk/openai'; // or { ollama } from 'ollama-ai-provider'
import { streamText } from 'ai';

export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = await streamText({
    model: openai('gpt-4-turbo'), // or ollama('llama3')
    messages,
    system: 'You are a helpful assistant.',
  });

  return result.toDataStreamResponse();
}
```
### 2. Frontend UI (React)

`components/Chat.tsx`:
```tsx
'use client';

import { useChat } from 'ai/react';

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat();

  return (
    <div>
      {messages.map(m => (
        <div key={m.id}>
          <strong>{m.role}:</strong> {m.content}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} />
      </form>
    </div>
  );
}
```
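`useChat` keeps the message list, input value, and submit handling in React state for you. Conceptually, the state update on each turn is an immutable append; a hand-rolled sketch for illustration (not the SDK's actual internals, which also merge streaming deltas into the last message):

```typescript
// Hypothetical manual version of the message-list state useChat manages.
type Message = { id: string; role: 'user' | 'assistant'; content: string };

function appendMessage(list: Message[], msg: Message): Message[] {
  return [...list, msg]; // immutable append, as React state updates require
}

let messages: Message[] = [];
messages = appendMessage(messages, { id: '1', role: 'user', content: 'Hi' });
messages = appendMessage(messages, { id: '2', role: 'assistant', content: 'Hello!' });
console.log(messages.length); // 2
```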
### 3. Using Ollama (Local Models)

To run entirely open-source/local:

- Run Ollama locally:

  ```bash
  ollama run llama3
  ```

- Change the model provider in the API route:

  ```ts
  import { ollama } from 'ollama-ai-provider';
  // ...
  model: ollama('llama3'),
  ```
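To switch providers without touching the rest of the route, the model choice can be driven by a single value such as an environment variable. A minimal sketch of that selection logic (the provider names are from this skill; the function itself is an invented helper for illustration):

```typescript
// Hypothetical helper: map a provider name to the model id used in streamText.
function pickModel(provider: string): { provider: string; model: string } {
  if (provider === 'ollama') {
    return { provider: 'ollama', model: 'llama3' }; // local, via ollama-ai-provider
  }
  return { provider: 'openai', model: 'gpt-4-turbo' }; // hosted default
}

const choice = pickModel('ollama');
console.log(choice.model); // llama3
```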