Decart

AI video/image generation platform with three APIs: Realtime (WebRTC, sub-500ms), Queue (async batch video), Process (sync image). SDKs for JavaScript, Python, and Swift. Auth via API key from platform.decart.ai — set DECART_API_KEY env var. Docs: https://docs.platform.decart.ai

When to use

  • Realtime video transformation — camera effects, video conferencing filters, AR/VR overlays, photo booths, live streaming
  • Batch video generation — marketing clips, social media content, product demos, animations from text or image
  • Image generation/editing — mockups, thumbnails, creative assets, image-to-image transformation
  • Avatar animation — virtual presenters, AI customer agents, talking heads animated with audio
  • Character transformation — turn a user's live camera feed into a character (anime, fantasy, etc.)
  • Framework integration — Vercel AI SDK, TanStack AI, LangChain.js all supported

API selection

| You want to... | API | Method | Best for |
| --- | --- | --- | --- |
| Transform live camera/video | Realtime | client.realtime.connect() | Interactive apps, sub-500ms latency |
| Generate video from text/image | Queue | client.queue.submitAndPoll() | Content pipelines, batch jobs |
| Edit existing video | Queue | client.queue.submitAndPoll() | Video transformation |
| Generate/edit image | Process | client.process() | Sync results, thumbnails |
| Animate avatar with audio | Realtime | client.realtime.connect() | Virtual presenters, no camera needed |

Model selection

| Use case | Model | Type |
| --- | --- | --- |
| Live character transform | lucy_2_rt | Realtime |
| Artistic style transfer | mirage_v2 | Realtime |
| Animate portrait with audio | live_avatar | Realtime |
| Text-to-video (best quality) | lucy-pro-t2v | Batch |
| Image-to-video | lucy-pro-i2v | Batch |
| Video-to-video (best quality) | lucy-pro-v2v | Batch |
| Video-to-video (fast) | lucy-fast-v2v | Batch |
| Motion control | lucy-motion | Batch |
| Text-to-image | lucy-pro-t2i | Batch |
| Image editing | lucy-pro-i2i | Batch |

Quick start patterns

Realtime (connect + transform)

```js
// `client` is an authenticated Decart SDK client; `models` is the SDK's model helper.
const model = models.realtime("lucy_2_rt");
// Request the camera at the model's fps/width/height to avoid scaling artifacts.
const stream = await navigator.mediaDevices.getUserMedia({
  video: { frameRate: model.fps, width: model.width, height: model.height }
});
const rt = await client.realtime.connect(stream, {
  model,
  onRemoteStream: (s) => { videoEl.srcObject = s; }
});
await rt.set({ prompt: "Transform into anime character", image: refImage, enhance: true });
```
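Per the gotchas below, moderation rejections throw from set(), setPrompt(), and setImage(), so wrap those calls in try/catch. A minimal sketch; updatePrompt and the onModerationError callback are illustrative names, not part of the SDK:

```js
// Hypothetical wrapper: forwards a prompt update to the realtime session and
// routes moderation rejections (thrown errors) to an app-level handler.
async function updatePrompt(rt, prompt, onModerationError) {
  try {
    await rt.setPrompt(prompt);
    return { ok: true };
  } catch (err) {
    // Moderation rejections from the SDK surface here.
    onModerationError(err);
    return { ok: false, error: err };
  }
}
```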

Queue (video generation)

```js
const result = await client.queue.submitAndPoll({
  model: models.video("lucy-pro-t2v"),
  prompt: "A golden retriever in a meadow, cinematic",
  resolution: "720p",
});
```

Process (image generation)

```js
const image = await client.process({
  model: models.image("lucy-pro-t2i"),
  prompt: "Cozy coffee shop, warm lighting",
  resolution: "720p",
});
```

Authentication

  • Server-side: Set DECART_API_KEY env var. The SDK picks it up automatically.
  • Client-side: Generate short-lived tokens on your backend via client.tokens.create(), then send token.apiKey to the frontend.
  • NEVER expose permanent API keys in browser or mobile code.
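A minimal sketch of the backend token step. client.tokens.create() and token.apiKey are from the doc above; issueClientToken is an illustrative name, and `client` is assumed to be a server-side SDK client holding the permanent key:

```js
// Hypothetical backend handler that mints a short-lived client token.
// Only token.apiKey is returned to the frontend; the permanent key never leaves the server.
async function issueClientToken(client) {
  const token = await client.tokens.create();
  return { apiKey: token.apiKey };
}
```

Wire this into a backend route (e.g. a hypothetical POST /api/decart-token) and have the browser fetch it before connecting.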

Camera constraints

Always use model.fps, model.width, and model.height from the SDK when requesting the camera stream; mismatched resolution causes scaling artifacts and added latency.

Boundaries

What agents CAN do

  • Generate integration code for all three APIs (Realtime, Queue, Process)
  • Help select models, configure SDK, set up authentication
  • Write WebRTC realtime streaming code
  • Build framework integrations (Vercel AI SDK, TanStack AI, LangChain.js)
  • Troubleshoot connection, moderation, and billing issues

What agents CANNOT do

  • Create or manage Decart accounts, API keys, or billing
  • Access the Decart dashboard or platform settings
  • Test realtime connections (requires actual WebRTC + camera)
  • Verify credit balance or usage
  • Bypass content moderation policies

Common gotchas

  • NEVER expose permanent API keys in client code. Use client tokens for browser/mobile.
  • ALWAYS use model.fps, model.width, model.height for camera constraints. Mismatched resolution causes latency and artifacts.
  • ALWAYS call disconnect() and stop media tracks when done. Leaving connections open causes memory leaks and continued billing charges.
  • Catch errors from set(), setPrompt(), setImage(). Moderation rejections throw here.
  • Listen to connectionChange events. The SDK auto-reconnects, but your UI must reflect the state.
  • Keep enhance: true (default) unless you need exact prompt control.
  • Use submitAndPoll() for queue jobs. Don't manually poll faster than every 2 seconds.
  • Image editing (i2i) only works with Process API, not Queue API. Queue is for video only.
  • Realtime sessions end silently when credits run out. Listen for disconnect events.
  • For Lucy 2 character transform, use clear portrait photos with good lighting.
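The disconnect/cleanup gotcha above can be sketched as a single teardown helper. disconnect() and track stopping come from the doc; teardownRealtime is an illustrative name:

```js
// Stops every captured camera/mic track, then closes the realtime session
// so the connection is not left open (memory leaks, continued billing).
async function teardownRealtime(rt, stream) {
  for (const track of stream.getTracks()) track.stop();
  await rt.disconnect();
}
```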

Verification checklist

  • API key stored in environment variable, never hardcoded
  • Client tokens generated on backend for any browser/mobile realtime usage
  • Camera resolution matches model's fps, width, height constraints
  • Error handlers on set(), setPrompt(), setImage() calls
  • connectionChange listener updates UI on disconnect/reconnect
  • disconnect() called and media streams stopped on cleanup
  • enhance: true enabled unless exact prompt control needed
  • Queue API for video generation, Process API for images
  • Moderation errors caught and shown to user gracefully
  • Tested on real device for mobile (simulator lacks WebRTC)
