# Smoke Testing NCP Chat

## Overview
When you need a fast check that a chat route returns a real reply, use the reusable smoke command instead of ad-hoc `curl` calls or clicking through the UI.
This smoke command:

- Sends one real chat message to a running NextClaw service
- Forces the request through the specified `session-type` and `model`
- Reads the returned SSE event stream
- Prints pass/fail, assistant text, terminal event, and error details
- Exits non-zero when the route does not produce a real assistant reply
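The pass/fail decision can be pictured as a small shell sketch that scans a captured SSE stream for a terminal event and for failure events. The stream content below is an illustrative assumption, not literal smoke output.

```shell
# Illustrative SSE stream (made-up sample, not real smoke output)
stream='event: message.delta
data: {"text":"OK"}

event: run.finished
data: {}'

# PASS requires a terminal event and no failure events in the stream
if printf '%s\n' "$stream" | grep -q 'run.finished' \
   && ! printf '%s\n' "$stream" | grep -qE 'run\.error|message\.failed'; then
  result="Result: PASS"
else
  result="Result: FAIL"
fi
echo "$result"
```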
## When to Use
- A quick check is needed to confirm that one concrete chat route can return a real assistant reply.
- A specific `session-type` + `model` pair needs to be validated without opening the UI.
- A fast smoke is preferred over ad-hoc request assembly.
## Command

```sh
pnpm smoke:ncp-chat -- --session-type native --model dashscope/qwen3-coder-next --port 18792
```
## Quick Reference

```sh
pnpm smoke:ncp-chat -- --session-type codex --model dashscope/qwen3-coder-next --port 18792
pnpm smoke:ncp-chat -- --session-type claude --model minimax/MiniMax-M2.5 --port 18794
pnpm smoke:ncp-chat -- --session-type native --model openai/gpt-5.3-codex --base-url http://127.0.0.1:18792
pnpm smoke:ncp-chat -- --session-type codex --model dashscope/qwen3-coder-next --prompt "Reply exactly OK" --json
```
## Success Criteria

- Exit code is `0`
- Output shows `Result: PASS`
- `Assistant Text` is non-empty
- No `run.error` or `message.failed`

When `--json` is used, the key checks are:

- `ok: true`
- `assistantText` is non-empty
- `terminalEvent` is usually `run.finished`
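For scripting, those `--json` checks can be applied with plain shell string matching; the payload below is a made-up sample, not real smoke output.

```shell
# Hypothetical --json payload (field names follow the checks listed above)
out='{"ok":true,"assistantText":"Hi","terminalEvent":"run.finished"}'

status=PASS
# ok flag must be true
printf '%s' "$out" | grep -q '"ok":true' || status=FAIL
# assistantText must be non-empty (at least one character before the closing quote)
printf '%s' "$out" | grep -q '"assistantText":"[^"]' || status=FAIL
echo "$status"
```

In a real pipeline a JSON-aware tool is safer than string matching, but the checks themselves stay the same.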
## Common Mistakes
- Testing the wrong port: `pnpm dev start` usually serves the API on `18792` in this repo.
- Forgetting `--session-type`: the smoke should target the exact runtime under investigation.
- Treating one runtime as proof for another runtime: `native`, `codex`, and `claude` should be checked explicitly.
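To avoid the last mistake, each runtime can be smoked in a short loop. This sketch only prints the commands (drop the `echo` to actually run them) and reuses the example model and port from above.

```shell
# Check every runtime explicitly; one runtime passing proves nothing
# about the others.
cmds=""
for st in native codex claude; do
  cmd="pnpm smoke:ncp-chat -- --session-type $st --model dashscope/qwen3-coder-next --port 18792"
  echo "$cmd"
  cmds="$cmds$cmd
"
done
```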