# DOPPLER Convert Skill

Use this skill to add or re-convert models for the Doppler runtime.
## Mandatory Style Guides

Read these before non-trivial conversion or manifest-contract changes:

- `docs/style/general-style-guide.md`
- `docs/style/javascript-style-guide.md`
- `docs/style/config-style-guide.md`
- `docs/style/command-interface-design-guide.md`
## Developer Guide Routing

For additive or extension-oriented conversion work, also open:

- `docs/developer-guides/README.md`

Then route to the matching playbook:

- New checked-in conversion recipe: `docs/developer-guides/04-conversion-config.md`
- Model-preset or family onboarding needed before conversion works: `docs/developer-guides/03-model-preset.md` or `docs/developer-guides/composite-model-family.md`
- Publication or curated metadata work: `docs/developer-guides/05-promote-model-artifact.md`
- New quantization/runtime artifact format: `docs/developer-guides/14-quantization-format.md`
## Execution Plane Contract
- JSON is the conversion contract (presets, manifests, converter config).
- JS is orchestration (parsing, conversion flow, validation, and artifact emission).
- WGSL is not selected here; compute policy is resolved later at runtime by manifest + kernel-path rules.
- Conversion must remain config-first and fail fast on unresolved kernel/policy requirements.
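As a minimal sketch of the "config-first, fail fast" rule (hypothetical helper name — the real validation lives in `src/tooling/node-converter.js`), a conversion request should be rejected before any work starts if required contract fields are unresolved:

```javascript
// Hypothetical illustration of the fail-fast contract: collect every
// unresolved required field, then refuse the whole request at once.
function assertConversionConfig(config) {
  const missing = [];
  if (!config?.request?.inputDir) missing.push("request.inputDir");
  if (!config?.request?.outputDir) missing.push("request.outputDir");
  const out = config?.request?.convertPayload?.converterConfig?.output;
  if (!out?.modelBaseId) missing.push("converterConfig.output.modelBaseId");
  if (missing.length > 0) {
    throw new Error(`Unresolved conversion config fields: ${missing.join(", ")}`);
  }
  return config;
}

// A complete request passes through unchanged.
const ok = assertConversionConfig({
  request: {
    inputDir: "INPUT_PATH",
    outputDir: "models/local/OUTPUT_ID",
    convertPayload: { converterConfig: { output: { modelBaseId: "OUTPUT_ID" } } },
  },
});
console.log(ok.request.outputDir); // models/local/OUTPUT_ID
```

Failing on the full list of missing fields, rather than the first one, keeps triage to a single round trip.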
## Primary Conversion Commands

```bash
# Convert from Safetensors directory (or GGUF file path) via unified CLI
npm run convert -- --config '{
  "request": {
    "inputDir": "INPUT_PATH",
    "outputDir": "models/local/OUTPUT_ID",
    "convertPayload": {
      "converterConfig": {
        "output": {
          "modelBaseId": "OUTPUT_ID"
        }
      }
    }
  },
  "run": {
    "surface": "node"
  }
}'

# Same conversion through direct Node helper with converter-config JSON
node tools/convert-safetensors-node.js INPUT_PATH --config ./converter-config.json --output-dir models/local/OUTPUT_ID
```
Notes:

- `INPUT_PATH` can be a Safetensors directory, diffusion directory, or `.gguf` file.
- The unified CLI convert path is `tools/doppler-cli.js` -> `runNodeCommand()` -> `src/tooling/node-converter.js`.
- The browser surface is intentionally rejected for `convert`.
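When scripting the CLI invocation, building the inline `--config` payload programmatically avoids shell-quoting mistakes in the JSON. A sketch (the helper name is hypothetical; the payload shape mirrors the command above):

```javascript
// Hypothetical helper: assemble the npm argv for the unified-CLI convert
// command from parts, serializing the config object in one place.
function buildConvertArgs(inputDir, outputId) {
  const config = {
    request: {
      inputDir,
      outputDir: `models/local/${outputId}`,
      convertPayload: { converterConfig: { output: { modelBaseId: outputId } } },
    },
    run: { surface: "node" },
  };
  return ["run", "convert", "--", "--config", JSON.stringify(config)];
}

const args = buildConvertArgs("INPUT_PATH", "OUTPUT_ID");
console.log(args[4]); // the JSON string passed to --config
```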
## Converter Config JSON (Optional)

Example:

```json
{
  "quantization": {
    "weights": "q4k",
    "embeddings": "f16",
    "lmHead": "f16",
    "q4kLayout": "row",
    "computePrecision": "f16"
  },
  "output": {
    "textOnly": false
  }
}
```
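A sketch of how such a partial config might be normalized against defaults, in keeping with the fail-fast contract. The allowed value set and the defaults below are assumptions drawn only from the example above, not from the real converter:

```javascript
// Assumed allowed quantization values, taken from the example config only.
const QUANT_VALUES = new Set(["q4k", "f16", "f32"]);

// Hypothetical normalizer: merge a partial converter config with defaults
// and reject unknown quantization values up front.
function normalizeConverterConfig(partial = {}) {
  const quant = {
    weights: "q4k",
    embeddings: "f16",
    lmHead: "f16",
    q4kLayout: "row",
    computePrecision: "f16",
    ...partial.quantization,
  };
  for (const key of ["weights", "embeddings", "lmHead"]) {
    if (!QUANT_VALUES.has(quant[key])) {
      throw new Error(`Unknown quantization for ${key}: ${quant[key]}`);
    }
  }
  return { quantization: quant, output: { textOnly: false, ...partial.output } };
}

const cfg = normalizeConverterConfig({ quantization: { weights: "f16" } });
console.log(cfg.quantization.weights, cfg.output.textOnly); // f16 false
```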
## Post-Conversion Verification (Mandatory)

```bash
# 1) Manifest exists
test -f models/local/OUTPUT_ID/manifest.json

# 2) Verify key manifest fields
jq '.modelId, .modelType, .quantization, .quantizationInfo, .inference.defaultKernelPath' models/local/OUTPUT_ID/manifest.json

# 3) Verify shards exist
ls models/local/OUTPUT_ID/shard_*.bin | wc -l

# 4) Sanity-run inference
npm run debug -- --config '{"request":{"modelId":"OUTPUT_ID","runtimePreset":"modes/debug"},"run":{"surface":"auto"}}' --json
```
For publication candidates, the verification bar is higher:

- Promote successful ad hoc configs
  - If the conversion used a temporary or inline config and the model runs successfully, copy/promote that config into `tools/configs/conversion/` so the conversion is reproducible.
- Run an actual coherence check
  - Use a deterministic prompt and deterministic sampling, not just a load-only run.
  - Recommended shape:

    ```bash
    npm run debug -- \
      --config '{"request":{"modelId":"OUTPUT_ID","runtimePreset":"modes/debug"},"run":{"surface":"auto"}}' \
      --runtime-config '{"shared":{"tooling":{"intent":"verify"}},"inference":{"prompt":"Explain what this model is in one short sentence.","sampling":{"temperature":0,"topK":1}}}' \
      --json
    ```

  - Inspect `result.output` (and summary metrics) for non-empty, coherent text.
- Pause for HITL review before promotion
  - Summarize the prompt and observed output for the human.
  - Before adding `models/catalog.json` entries, syncing support-matrix metadata, or uploading/publishing to Hugging Face, stop and ask for confirmation.
- Offer optional perf validation
  - If the output looks correct, propose:

    ```bash
    npm run bench -- --config ... --json
    node tools/vendor-bench.js ...
    node tools/compare-engines.js ...
    ```
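The `result.output` inspection step can be partially automated. A sketch of a minimal machine-side bar (the JSON shape is assumed from the field name above; human review still decides coherence):

```javascript
// Hypothetical shape check for the --json debug output: fail fast on
// malformed JSON, a missing result.output, or effectively empty text.
function passesCoherenceBar(jsonText) {
  let parsed;
  try {
    parsed = JSON.parse(jsonText);
  } catch {
    return false;
  }
  const output = parsed?.result?.output;
  if (typeof output !== "string") return false;
  const trimmed = output.trim();
  // Minimal bar: non-empty with at least one real word; a human still
  // judges whether the text is actually coherent.
  return trimmed.length > 0 && /\w{3,}/.test(trimmed);
}

console.log(passesCoherenceBar('{"result":{"output":"A small text model."}}')); // true
console.log(passesCoherenceBar('{"result":{"output":"   "}}')); // false
```

This only screens out obvious failures; it does not replace the HITL review step.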
## Conversion Triage Contract

When conversion quality is in question, follow the AGENTS.md triage protocol:

- Verify source dtypes.
- Verify manifest `quantization` + `quantizationInfo` + default kernel path.
- Verify shard integrity vs manifest hashes.
- Verify sampled tensor numeric sanity, source vs converted bytes.
- Verify layer pattern semantics (`every_n` behavior).
## Canonical Files

- `tools/doppler-cli.js`
- `tools/convert-safetensors-node.js`
- `src/tooling/node-command-runner.js`
- `src/tooling/node-converter.js`
- `src/converter/core.js`
- `src/converter/conversion-plan.js`
- `docs/rdrr-format.md`
- `docs/developer-guides/README.md`
- `AGENTS.md`
## Related Skills

- `doppler-debug` for runtime correctness after conversion
- `doppler-bench` for perf regressions between variants
Repository: `clocksmith/doppler`