# MiniMax Prompt
Use MiniMax M2.5 for one-shot LLM tasks. Two execution methods available:
## CLI Usage (`minimax-prompt`)
Shell wrapper at `~/.local/bin/minimax-prompt`. Drop-in replacement for `gemini -p`.
```bash
# Simple prompt
minimax-prompt "Explain quantum computing in one paragraph"

# With -p flag (gemini-compatible)
minimax-prompt -p "Summarize this text"

# Pipe content + prompt
echo "Raw data here" | minimax-prompt -p "Summarize the above"

# Pipe content only (content becomes the prompt)
echo "What is 2+2?" | minimax-prompt
```
Default timeout: 180s. Output is plain text on stdout; errors go to stderr.
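The CLI contract above (prompt via stdin, reply on stdout, errors on stderr, 180s timeout) can be wrapped from Python like this. `run_prompt` is a hypothetical helper sketched here, not part of the skill:

```python
import subprocess

def run_prompt(prompt, cmd=("minimax-prompt",), timeout=180):
    """Run a one-shot prompt through a CLI that reads stdin and writes stdout.

    Returns the reply from stdout; raises RuntimeError with the CLI's
    stderr on a non-zero exit, and subprocess.TimeoutExpired on timeout.
    """
    result = subprocess.run(
        list(cmd),
        input=prompt,
        capture_output=True,
        text=True,
        timeout=timeout,
    )
    if result.returncode != 0:
        raise RuntimeError(result.stderr.strip())
    return result.stdout
```

Any stdin-to-stdout command (e.g. `cat`) exercises the same contract, which makes the wrapper easy to test without the real binary.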
## Direct API Usage (from scripts)
For Python/shell scripts that can't depend on the CLI (e.g., Docker containers):
```bash
curl -sS https://api.minimax.io/anthropic/v1/messages \
  -H "x-api-key: $MINIMAX_CODING_PLAN_API_KEY" \
  -H "content-type: application/json" \
  -H "anthropic-version: 2023-06-01" \
  -d '{
    "model": "MiniMax-M2.5",
    "max_tokens": 8192,
    "messages": [{"role": "user", "content": "Your prompt here"}]
  }'
```
Response format (Anthropic Messages API):
```json
{
  "content": [{"type": "text", "text": "Response here"}],
  "model": "MiniMax-M2.5",
  "stop_reason": "end_turn"
}
```
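In Python, pulling the reply out of that response is a short walk over the `content` list; filtering on `"type": "text"` also skips any thinking blocks the Anthropic endpoint returns separately. A minimal sketch, assuming the response has already been decoded from JSON into a dict:

```python
def extract_text(response):
    """Join all text blocks from an Anthropic Messages API response dict,
    ignoring non-text blocks (e.g. separated thinking content)."""
    return "".join(
        block["text"]
        for block in response.get("content", [])
        if block.get("type") == "text"
    )
```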
## API Endpoints
| Endpoint | Format | Speed |
|---|---|---|
| `https://api.minimax.io/anthropic/v1/messages` | Anthropic Messages | ~2s (preferred) |
| `https://api.minimax.io/v1/chat/completions` | OpenAI Completions | ~6s |
The Anthropic endpoint is 3x faster and separates thinking tags natively.
## Model
- MiniMax-M2.5 — Only model available on Coding Plan
- MiniMax-M2.5-lightning is NOT available on Coding Plan
- 204K context window, 131K max output tokens
## Setup
Requires the `MINIMAX_CODING_PLAN_API_KEY` environment variable. Set via:

- OpenClaw: `~/.config/systemd/user/secrets.conf` (referenced as `${MINIMAX_CODING_PLAN_API_KEY}` in `openclaw.json`)
- Shell: `export MINIMAX_CODING_PLAN_API_KEY="your-key"`
- Docker: `-e MINIMAX_CODING_PLAN_API_KEY`
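A script that depends on the key can fail fast with a clear message instead of a cryptic 401 later; a minimal sketch, with `require_api_key` as a hypothetical helper:

```python
import os
import sys

def require_api_key(name="MINIMAX_CODING_PLAN_API_KEY"):
    """Return the API key from the environment, or exit with a hint."""
    key = os.environ.get(name)
    if not key:
        sys.exit(f"{name} is not set; export it or add it to secrets.conf")
    return key
```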
## Notes
- Coding Plan is a subscription: API calls are included, with no per-token billing
- Replaces the retired `gemini` skill (Gemini CLI banned, Feb 2026)
- For structured agent turns, prefer `openclaw agent --local` instead
- Stateless: no state files or caching needed