# ai-model-wechat

## Summary

AI text generation and streaming for WeChat Mini Programs via `wx.cloud.extend.AI`.

- Supports two methods: `generateText()` for non-streaming responses and `streamText()` for real-time streaming with callback support (`onText`, `onEvent`, `onFinish`)
- Built-in models include Hunyuan (recommended: `hunyuan-2.0-instruct-20251111`) and DeepSeek (recommended: `deepseek-v3.2`)
- API differs from the JS/Node SDK: `generateText()` returns the raw model response; `streamText()` requires parameters wrapped in a `data` object
- Requires WeChat base library 3.7.1+; no additional SDK installation needed
- Not suitable for browser/Web apps, Node.js backends, or image generation

## SKILL.md
## When to use this skill

Use this skill for calling AI models in a WeChat Mini Program using `wx.cloud.extend.AI`.

Use it when you need to:

- Integrate AI text generation in a Mini Program
- Stream AI responses with callback support
- Call Hunyuan models from the WeChat environment

Do NOT use for:

- Browser/Web apps → use the `ai-model-web` skill
- Node.js backend or cloud functions → use the `ai-model-nodejs` skill
- Image generation → use the `ai-model-nodejs` skill (not available in Mini Program)
- HTTP API integration → use the `http-api` skill
## Available Providers and Models

CloudBase provides these built-in providers and models:

| Provider | Models | Recommended |
|---|---|---|
| `hunyuan-exp` | `hunyuan-turbos-latest`, `hunyuan-t1-latest`, `hunyuan-2.0-thinking-20251109`, `hunyuan-2.0-instruct-20251111` | ✅ `hunyuan-2.0-instruct-20251111` |
| `deepseek` | `deepseek-r1-0528`, `deepseek-v3-0324`, `deepseek-v3.2` | ✅ `deepseek-v3.2` |
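For apps that let users switch providers, the table above can be mirrored in a small lookup so the recommended model is picked automatically. This is a sketch: `recommendedModel` is a hypothetical helper, while the provider and model names come from the table.

```javascript
// Recommended model per built-in provider (values from the table above).
const RECOMMENDED_MODELS = {
  "hunyuan-exp": "hunyuan-2.0-instruct-20251111",
  "deepseek": "deepseek-v3.2",
};

// Hypothetical helper: resolve a provider name to its recommended model,
// failing early on unknown providers instead of at request time.
function recommendedModel(provider) {
  const model = RECOMMENDED_MODELS[provider];
  if (!model) {
    throw new Error("Unknown provider: " + provider);
  }
  return model;
}
```

The result can then be passed as the `model` field of `generateText()` or `streamText()`.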
## Prerequisites

- WeChat base library 3.7.1+
- No extra SDK installation needed

## Initialization

```js
// app.js
App({
  onLaunch: function () {
    wx.cloud.init({ env: "<YOUR_ENV_ID>" });
  },
});
```
## generateText() - Non-streaming

⚠️ Different from the JS/Node SDK: the return value is the raw model response.

```js
const model = wx.cloud.extend.AI.createModel("hunyuan-exp");

const res = await model.generateText({
  model: "hunyuan-2.0-instruct-20251111", // Recommended model
  messages: [{ role: "user", content: "Hello" }],
});

// ⚠️ Return value is the RAW model response, NOT wrapped like the JS/Node SDK
console.log(res.choices[0].message.content); // Access via choices array
console.log(res.usage); // Token usage
```
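Because the return value is the raw response, it is worth guarding the `choices[0].message` access in one place. A minimal sketch, runnable outside the Mini Program: `extractText` is a hypothetical helper, and `sampleRes` mimics the raw response shape documented below.

```javascript
// Hypothetical helper: pull the assistant text out of the raw
// OpenAI-compatible response returned by generateText().
function extractText(res) {
  const choice = res && res.choices && res.choices[0];
  if (!choice || !choice.message) {
    throw new Error("Unexpected response shape: missing choices[0].message");
  }
  return choice.message.content;
}

// Sample object mimicking the raw response shape (not a real API result).
const sampleRes = {
  choices: [
    { index: 0, message: { role: "assistant", content: "Hello!" }, finish_reason: "stop" },
  ],
  usage: { prompt_tokens: 3, completion_tokens: 2, total_tokens: 5 },
};
```

In a real call you would pass the awaited `generateText()` result to `extractText` instead of `sampleRes`.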
## streamText() - Streaming

⚠️ Different from the JS/Node SDK: parameters must be wrapped in a `data` object; callbacks are supported.

```js
const model = wx.cloud.extend.AI.createModel("hunyuan-exp");

// ⚠️ Parameters MUST be wrapped in a `data` object
const res = await model.streamText({
  data: { // ⚠️ Required wrapper
    model: "hunyuan-2.0-instruct-20251111", // Recommended model
    messages: [{ role: "user", content: "hi" }],
  },
  onText: (text) => { // Optional: incremental text callback
    console.log("New text:", text);
  },
  onEvent: ({ data }) => { // Optional: raw event callback
    console.log("Event:", data);
  },
  onFinish: (fullText) => { // Optional: completion callback
    console.log("Done:", fullText);
  },
});

// Async iteration is also available
for await (const str of res.textStream) {
  console.log(str);
}

// Check for completion with eventStream
for await (const event of res.eventStream) {
  console.log(event);
  if (event.data === "[DONE]") { // ⚠️ Check for [DONE] to stop
    break;
  }
}
```
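The `eventStream` consumption pattern above can be exercised outside the Mini Program by standing in a mock async generator for `res.eventStream`. A sketch: `collectEvents` and `mockEventStream` are hypothetical, while the `{ data }` event shape and the `"[DONE]"` sentinel come from the docs above.

```javascript
// Hypothetical helper: drain an eventStream-like async iterable,
// stopping at the "[DONE]" completion sentinel.
async function collectEvents(eventStream) {
  const payloads = [];
  for await (const event of eventStream) {
    if (event.data === "[DONE]") break; // stop on the completion sentinel
    payloads.push(event.data);
  }
  return payloads;
}

// Mock standing in for res.eventStream (not a real API stream).
async function* mockEventStream() {
  yield { data: "Hel" };
  yield { data: "lo" };
  yield { data: "[DONE]" };
  yield { data: "never reached" }; // demonstrates that iteration stops at the sentinel
}
```

With the real API you would pass `res.eventStream` to `collectEvents` instead of the mock.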
## Error Handling Pattern

```js
const model = wx.cloud.extend.AI.createModel("deepseek");

try {
  const res = await model.generateText({
    model: "deepseek-v3.2",
    messages: [{ role: "user", content: "Write a short welcome message" }],
  });
  console.log(res.choices[0].message.content);
} catch (error) {
  console.error("Mini Program AI request failed", error);
}
```
## API Comparison: JS/Node SDK vs WeChat Mini Program

| Feature | JS/Node SDK | WeChat Mini Program |
|---|---|---|
| Namespace | `app.ai()` | `wx.cloud.extend.AI` |
| generateText params | Direct object | Direct object |
| generateText return | `{ text, usage, messages }` | Raw: `{ choices, usage }` |
| streamText params | Direct object | ⚠️ Wrapped in `data: {...}` |
| streamText return | `{ textStream, dataStream }` | `{ textStream, eventStream }` |
| Callbacks | Not supported | `onText`, `onEvent`, `onFinish` |
| Image generation | Node SDK only | Not available |
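Code shared between the two environments can smooth over the return-shape difference with a tiny adapter. A sketch under the shapes in the table above: `toSdkShape` is a hypothetical helper, not part of either SDK.

```javascript
// Hypothetical adapter: normalize the Mini Program's raw generateText()
// response to the { text, usage } shape the JS/Node SDK returns, so
// shared code can consume a single shape in both environments.
function toSdkShape(rawRes) {
  return {
    text: rawRes.choices[0].message.content,
    usage: rawRes.usage, // same { prompt_tokens, completion_tokens, total_tokens } shape
  };
}
```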
## Type Definitions

### streamText() Input

```ts
interface WxStreamTextInput {
  data: { // ⚠️ Required wrapper object
    model: string;
    messages: Array<{
      role: "user" | "system" | "assistant";
      content: string;
    }>;
  };
  onText?: (text: string) => void; // Incremental text callback
  onEvent?: (prop: { data: string }) => void; // Raw event callback
  onFinish?: (text: string) => void; // Completion callback
}
```

### streamText() Return

```ts
interface WxStreamTextResult {
  textStream: AsyncIterable<string>; // Incremental text stream
  eventStream: AsyncIterable<{ // Raw event stream
    event?: unknown;
    id?: unknown;
    data: string; // "[DONE]" when complete
  }>;
}
```

### generateText() Return

```ts
// Raw model response (OpenAI-compatible format)
interface WxGenerateTextResponse {
  id: string;
  object: "chat.completion";
  created: number;
  model: string;
  choices: Array<{
    index: number;
    message: {
      role: "assistant";
      content: string;
    };
    finish_reason: string;
  }>;
  usage: {
    prompt_tokens: number;
    completion_tokens: number;
    total_tokens: number;
  };
}
```
## Best Practices

- **Check the base library version** - Ensure 3.7.1+ for AI support
- **Use callbacks for UI updates** - `onText` is great for real-time display
- **Check for [DONE]** - When using `eventStream`, check `event.data === "[DONE]"` to stop
- **Handle errors gracefully** - Wrap AI calls in try/catch
- **Remember the `data` wrapper** - `streamText()` params must be wrapped in `data: {...}`
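The "callbacks for UI updates" practice can be sketched as a small accumulator that turns incremental `onText` chunks into one growing string for rendering. This is a sketch: `makeAccumulator` and its `render` parameter are hypothetical, while `setData` is the usual Mini Program render path.

```javascript
// Hypothetical helper: build an onText callback that accumulates chunks
// and hands the full text so far to a render function.
function makeAccumulator(render) {
  let full = "";
  return function onText(chunk) {
    full += chunk;
    render(full); // in a Page: this.setData({ answer: full })
  };
}
```

In a Page, the returned function would be passed as the `onText` option of `streamText()`, with `render` wrapping `this.setData`.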
Repository: tencentcloudbase/skills