# Nebula IPC Integration Guide
## Detailed Documentation

Detailed APIs, configuration fields, error codes, and constants live in the references/ subfolder next to this file:
| Purpose | File |
|---|---|
| Quick-start integration flow | references/guides/quick-start.md |
| Building SessionConfig | references/guides/session-config.md |
| AI object detection integration | references/guides/ai-detection.md |
| Terser errors in Tuya SaaS micro-apps | references/guides/tuya-saas-micro-app.md |
| Worker paths (Webpack/Vite) | references/guides/worker-webpack-vite.md |
| Full IpcClient API | references/api/ipc-client.md |
| IpcPlaybackClient API | references/api/playback-client.md |
| RTC / MQTT channel API | references/api/base-channel.md |
| Event constants | references/api/events.md |
| Error codes | references/error-codes.md |
| Default constants and enums | references/constants.md |
## Overview
@nebula-media-sdk/ipc wraps the two main IPC camera scenarios — live preview (real-time streaming) and playback — transported over WebRTC P2P or HLS/SDHLS. The core integration flow is: build a SessionConfig → create an IpcClient → connect → listen for events → operate (snapshot / recording / clarity switch) → close.
## 1. Dependencies
```ts
import { IpcClient, IpcClientConfig, SessionConfig, PlaybackClientEnum } from '@nebula-media-sdk/ipc';
import { IpcPlaybackClient, IpcPlaybackClientConfig } from '@nebula-media-sdk/ipc';
import { NebulaMqttChannel, NebulaWebSocketChannel, NEBULA_EVENTS } from '@nebula-media-sdk/base';
import { NebulaAIObjectDetectorExtension } from '@nebula-media-sdk/ai-object-detection';
```
## 2. Building the MQTT / WebSocket Message Channel
```ts
// MQTT channel
const mqttChannel = new NebulaMqttChannel({
  clientType: 'mqtt',
  clientId: mqttConfig.username,
  url: `wss://${mqttConfig.username}:${mqttConfig.password}@${mqttBrokerHost}`,
  username: mqttConfig.username,
  password: mqttConfig.password,
  clientOptions: {},
});

// WebSocket channel (alternative to MQTT)
const wsChannel = new NebulaWebSocketChannel({
  clientType: 'websocket',
  clientId: wsConfig.username,
  url: `wss://${wsConfig.host}`,
  username: wsConfig.username,
  password: wsConfig.password,
  clientOptions: {},
});

// Listen for authorization failures
mqttChannel.notification.addEventListener(NEBULA_EVENTS.SIGNALING_SERVER_AUTHORIZATION_FAILED, () => {
  console.error('MQTT authorization failed');
});
```
## 3. Building SessionConfig

SessionConfig is the complete configuration object needed to connect to a device; it is mapped from the P2P configuration data returned by the backend.

### 3a. P2P v3 (p2pType === 4)
```ts
const sessionConfig: SessionConfig = {
  messageChannel: mqttChannel, // or wsChannel
  channelConfig: {
    p2pType: p2pConfigRes.p2pType, // 4
    pv: p2pConfigRes.protocolVersion,
    authorization: p2pConfigRes.p2pConfig.auth,
    motoId: p2pConfigRes.p2pConfig.motoId,
    messageId: mqttConfig.username.split('_')[1], // parsed from the MQTT username
    ices: p2pConfigRes.p2pConfig.ices.map((ice: any) => ({
      ...ice,
      ttl: ice.ttl ?? 3600,
    })),
    customChannelOptions: {},
  },
  mediaStreamInfo: {
    webrtc: skill.webrtc,
    videos: skill.videos, // IpcMediaStreamInfoVideoConfig[]
    audios: skill.audios, // IpcMediaStreamInfoAudioConfig[]
    lowPower: skill.lowPower ?? 0,
  },
  supportedMediaCapabilities: {
    clarity: (skill.webrtc >>> 0).toString(2).slice(-5, -4) === '1', // clarity switching supported?
    speak: true,
    record: true,
    ptz: true,
    replay: true,
  },
  deviceConfig: {
    devId: p2pConfigRes.id,
    rotate: p2pConfigRes.rotate ?? '',
    gatewayId: p2pConfigRes.gatewayId ?? '',
    nodeId: p2pConfigRes.nodeId ?? '',
  },
  clarity: 'SD',
};
```
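The bit test on `skill.webrtc` above is easy to misread. An equivalent, self-contained helper (the bit position is inferred from the `slice(-5, -4)` expression above: zero-indexed bit 4) might look like:

```ts
// Equivalent to (webrtc >>> 0).toString(2).slice(-5, -4) === '1':
// test bit 4 (value 1 << 4 = 16) of the webrtc capability bitmask.
function supportsClarity(webrtc: number): boolean {
  return ((webrtc >>> 0) & (1 << 4)) !== 0;
}
```

A plain bitwise AND avoids the string round-trip and makes the flag's position explicit.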
### 3b. P2P v4 (p2pType === 8)
```ts
const sessionConfig: SessionConfig = {
  messageChannel: mqttChannel,
  channelConfig: {
    p2pType: p2pConfigRes.p2pType, // 8
    pv: p2pConfigRes.protocolVersion ?? '2.2',
    motoId: p2pConfigRes.p2pConfig.moto_id,
    messageId: mqttConfig.username.split('_')[1],
    ices: p2pConfigRes.p2pConfig.ice_token.servers.map((s: any) => ({
      urls: s.url,
      username: s.username,
      credential: s.credential,
      ttl: s.ttl ?? 3600,
    })),
    iceToken: p2pConfigRes.p2pConfig.ice_token,
    logToken: p2pConfigRes.p2pConfig.log_token,
    from: p2pConfigRes.p2pConfig.from,
    to: p2pConfigRes.p2pConfig.to,
    v: p2pConfigRes.p2pConfig.v,
    expired: p2pConfigRes.p2pConfig.expired,
    username: p2pConfigRes.p2pConfig.username,
    password: p2pConfigRes.p2pConfig.password,
    cloudCid: p2pConfigRes.p2pConfig.cloud_cid,
    customChannelOptions: {
      enableDataChannel: p2pConfigRes.p2pConfig.enable_datachannel === 1,
    },
  },
  mediaStreamInfo: {
    webrtc: p2pConfigRes.skillV4.webrtc,
    videos: p2pConfigRes.skillV4.videos,
    audios: p2pConfigRes.skillV4.audios,
    lowPower: p2pConfigRes.isLowPower ?? 0,
  },
  supportedMediaCapabilities: {
    clarity: (p2pConfigRes.skillV4.webrtc >>> 0).toString(2).slice(-5, -4) === '1',
    speak: true, record: true, ptz: true, replay: true,
  },
  deviceConfig: { devId: p2pConfigRes.id },
  clarity: 'SD',
};
```
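Both variants derive `messageId` by splitting the MQTT username on `_`. A tiny helper, assuming the `prefix_id` username shape implied by the examples above, makes the failure mode explicit instead of yielding `undefined`:

```ts
// Extract the messageId segment from a "prefix_id" MQTT username;
// returns '' when the username has no underscore-separated id.
function parseMessageId(username: string): string {
  return username.split('_')[1] ?? '';
}
```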
## 4. Live Preview (IpcClient)
```ts
const container = document.getElementById('video-container') as HTMLElement;
const client = new IpcClient({
  ...sessionConfig,
  container,
});

// Listen for connection state events
client.on(NEBULA_EVENTS.CONNECTION_SUCCESS, () => console.log('connected'));
client.on(NEBULA_EVENTS.CONNECTION_FAILED, () => console.log('connection failed'));
client.on(NEBULA_EVENTS.CONNECTION_CLOSED, () => console.log('connection closed'));
client.on(NEBULA_EVENTS.CONNECTION_DISCONNECTED, () => console.log('disconnected'));
client.on(NEBULA_EVENTS.RTC_CONNECTION_ESTABLISHED, () => console.log('RTC established'));

// Start streaming
await client.connect();

// Stop streaming
client.close();
```
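If you want the connect attempt to settle as a single promise that also fails when a failure event fires first, one sketch is to race `connect()` against the event. The `MinimalClient` interface below is illustrative, not the SDK's real typing:

```ts
// Minimal shape this helper relies on: an event subscription plus connect().
interface MinimalClient {
  on(event: string, cb: () => void): void;
  connect(): Promise<void>;
}

// Resolve when connect() succeeds; reject if the failure event fires first.
function connectOrFail(client: MinimalClient, failedEvent: string): Promise<void> {
  return new Promise<void>((resolve, reject) => {
    client.on(failedEvent, () => reject(new Error(`received ${failedEvent}`)));
    client.connect().then(resolve, reject);
  });
}
```

Usage (assuming the event constants shown above): `await connectOrFail(client, NEBULA_EVENTS.CONNECTION_FAILED);`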
## 5. Operations: Clarity Switch / Snapshot / Recording / Volume
```ts
// Switch clarity (SD ↔ HD)
await client.switchClarity('HD'); // or 'SD'

// Snapshot (auto-download) - requires NebulaMediaSnapshotDownloadPlugin to be registered first
client.snapshot();

// Recording
client.startRecording();
client.stopRecording();

// Volume (0 ~ 1)
client.setVolume(0.8);
const vol = client.getVolume();

// Two-way talk
client.startToTalk();
client.stopToTalk();
```
Snapshot / recording download plugin: register the plugin in `mediaStreamAbilityExtensions`, or call `channel.registerPlugin(new NebulaMediaSnapshotDownloadPlugin())` after `connect()`.
## 6. Playback (IpcPlaybackClient)
```ts
const playbackClient = new IpcPlaybackClient({
  container,
  clientType: PlaybackClientEnum.HLS, // HLS | SDHLS | RTC
  config: {
    queryResource: async (timestamp) => { /* return FragmentTimestamp[] */ },
    queryUrl: async (fragment) => 'https://...m3u8',
  },
});

// Query recording fragments (by date)
const fragments = await playbackClient.queryFragments(Date.now());

// Play from a given timestamp
await playbackClient.play(fragments[0].startTime);

// Pause / stop
playbackClient.pause();
await playbackClient.stop();

// Snapshot / recording (same as preview)
playbackClient.snapshot('my-snapshot');
playbackClient.startRecording();
playbackClient.stopRecording('my-video');

// Listen for events
playbackClient.on(NEBULA_EVENTS.CONNECTION_SUCCESS, () => {});

// Close the connection
await playbackClient.close();
```
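When mapping a user's seek position onto the fragments returned by `queryFragments`, a small lookup helper is handy. The `startTime`/`endTime` shape below is inferred from the example above; adjust it to the real FragmentTimestamp type:

```ts
// Assumed fragment shape: millisecond start/end timestamps.
interface Fragment {
  startTime: number;
  endTime: number;
}

// Return the fragment covering ts, or undefined if ts falls in a gap
// (e.g. a period with no recording).
function findFragment(fragments: Fragment[], ts: number): Fragment | undefined {
  return fragments.find((f) => ts >= f.startTime && ts <= f.endTime);
}
```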
## 7. AI Capabilities (Object Detection / Gesture Recognition)
```ts
// 1. Preload the model (call as early as possible at app startup to avoid first-detection latency)
const aiExtension = new NebulaAIObjectDetectorExtension({ container });
await aiExtension.preloadObjectDetectionModel({
  targetClasses: ['person'], // detect people only
  enableGPU: true,
  backend: 'webgl',
  enableCache: true, // IndexedDB cache, avoids re-downloading
});

// 2. Register on the IpcClient (before or after connect)
client.registerAIExtension(aiExtension);

// 3. Start detection (call after connect succeeds)
aiExtension.startAIExtensionProcessing(
  100, // detection interval in ms
  {
    objectDetection: true,   // draw detection boxes
    faceBlur: false,         // face blurring
    backgroundMosaic: false, // background mosaic
    subjectDetection: false, // subject detection
    gestureDetection: false, // gesture recognition
  },
  (result) => {
    console.log('detections', result.detectionResults, 'took', result.processingTime, 'ms');
  },
);

// 4. Stop detection
aiExtension.stopAIExtensionProcessing();

// 5. Dispose
aiExtension.dispose();
```
Gesture recognition: likewise requires preloading via `await aiExtension.preloadGestureDetectionModel()`; it then runs automatically when `gestureDetection: true` is set.
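The 100 ms interval above is only a starting point. Since the result callback reports `processingTime`, you can adapt the interval so detection never saturates the main thread. A heuristic sketch (the 2x headroom factor and the clamp bounds are assumptions, not SDK defaults):

```ts
// Pick the next detection interval from the last measured processing time,
// keeping ~2x headroom and clamping to [min, max] milliseconds.
function nextInterval(processingTime: number, min = 50, max = 1000): number {
  return Math.min(max, Math.max(min, Math.ceil(processingTime * 2)));
}
```

To apply it you would stop and restart processing with the new interval, since the interval is passed to `startAIExtensionProcessing` once.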
## 8. Event Quick Reference
| Event constant | Description |
|---|---|
| CONNECTION_SUCCESS | Overall connection established |
| CONNECTION_FAILED | Connection failed |
| CONNECTION_CLOSED | Closed intentionally |
| CONNECTION_DISCONNECTED | Unexpectedly disconnected |
| RTC_CONNECTION_ESTABLISHED | WebRTC channel established |
| RTC_CONNECTION_DISCONNECTED | WebRTC disconnected |
| RTC_CONNECTION_FAILED | WebRTC failed |
| SIGNALING_SERVER_CONNECTED | MQTT/WS signaling connected |
| SIGNALING_SERVER_AUTHORIZATION_FAILED | Signaling authorization failed |
| RENDERER_FIRST_FRAME_RECEIVED | First frame received |
| AUDIO_INTERCOM_CHANNEL_OPEN_SUCCESS | Intercom channel opened |
| RECORDING_START_SUCCESS / RECORDING_STOP_SUCCESS | Recording started / stopped |
| SNAPSHOT_SUCCESS / SNAPSHOT_FAILED | Snapshot succeeded / failed |
| DATA_BITRATE | Bitrate update |
| NETWORK_QUALITY_CHANGE | Network quality change |
## Common Errors
| Error | Cause | Fix |
|---|---|---|
| err_mqtt_config_not_valid | MQTT config is missing clientId/url/username/password | Check the return value of messageFormat() |
| err_channel_not_initialized | An operation was called before connect() | Make sure await client.connect() has completed before operating |
| "AI model not ready" warning | start was called before model preloading finished | await preloadObjectDetectionModel() first |
| switchClarity has no effect | supportedMediaCapabilities.clarity is false | Check flag bit 4 of skill.webrtc |
| Black snapshot | In WebGL rendering mode, snapshots require the Snapshot plugin | Register NebulaMediaSnapshotDownloadPlugin |