# wksp
Open a new terminal tab, cd to the given folder or worktree, and launch a coding tool (Claude Code or OpenCode) with a chosen model. Optionally auto-enter handoff mode.
macOS only. Uses AppleScript to open a new tab in iTerm2 (preferred) or Terminal.app (fallback).
## Usage

```
/wksp <name-or-path> [with|using|as <model>]
```
If `<name-or-path>` is not provided, list the available worktrees and ask the user to pick one.
Natural-language model specification:

```
/wksp spawn2 with haiku
/wksp spawn2 using sonnet
/wksp spawn2 as opus
```
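The `with|using|as` parsing could be sketched like this (an illustrative re-implementation, not necessarily what `parse_spawn_command` in `wksp_ops.py` actually does):

```python
import re

def parse_spawn_command(arg: str):
    """Split '<name-or-path> [with|using|as <model>]' into (path, model).

    Sketch only; the real parse_spawn_command in wksp_ops.py may differ.
    """
    m = re.match(r"^(.*?)\s+(?:with|using|as)\s+(\S+)\s*$", arg.strip())
    if m:
        return m.group(1).strip(), m.group(2)
    return arg.strip(), None  # no model keyword: whole arg is the path
```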
## Sandbox: CRITICAL
The pretrust step writes to `~/.claude.json`, which requires `dangerouslyDisableSandbox: true`.
The `osascript` command MUST also run with `dangerouslyDisableSandbox: true`.
## Workflow

### Step 1: Parse natural language and resolve folder path

```bash
python3 -c "import os, sys; sys.path.insert(0, os.path.expanduser('~/.claude/skills/wksp')); from wksp_ops import parse_spawn_command; path, model = parse_spawn_command('<ARG>'); print(f'{path} {model or \"\"}'.strip())"
```

Then resolve the path:

```bash
python3 ~/.claude/skills/wksp/wksp_ops.py resolve-spawn-path --arg '<PATH>'
```
If multiple matches are returned, ask the user to choose one. If there are no matches, stop.
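The resolution step could be sketched roughly as follows. The `~/worktrees` root is a hypothetical location for illustration only; the real `resolve-spawn-path` op presumably knows the actual worktree layout:

```python
from pathlib import Path

def resolve_spawn_path(arg: str, roots=("~/worktrees",)):
    """Resolve a name or path to candidate folders (sketch).

    If arg is an existing directory, return it directly; otherwise search
    the (assumed) worktree roots for directories whose name contains arg.
    """
    p = Path(arg).expanduser()
    if p.is_dir():
        return [p.resolve()]
    matches = []
    for root in roots:
        rootp = Path(root).expanduser()
        if rootp.is_dir():
            matches += [d for d in rootp.iterdir() if d.is_dir() and arg in d.name]
    return matches
```

Zero matches means stop; more than one means ask the user, as above.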
### Step 1.5: Detect available tools and build model list

Before presenting options, check which tools are installed and gather models:

```bash
which claude && echo "claude:yes" || echo "claude:no"
which opencode && echo "opencode:yes" || echo "opencode:no"
```
If neither is installed, stop with an error.
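Equivalently in Python, using `shutil.which` (a sketch of the same check):

```python
import shutil

def detect_tools():
    """Return installed coding tools, mirroring the `which` checks above."""
    tools = [t for t in ("claude", "opencode") if shutil.which(t)]
    if not tools:
        raise SystemExit("Neither claude nor opencode is installed.")
    return tools
```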
Claude Code models (hardcoded): `opus`, `sonnet`, `haiku`.

OpenCode models (dynamic; only fetch if OpenCode is installed):

```bash
opencode models
```

Parse the output: each line is a model ID (e.g. `openai/gpt-5.2`).
Build the combined model list:

- Start with the Claude models as `["Opus (Claude)", "claude:opus"]`, `["Sonnet (Claude)", "claude:sonnet"]`, `["Haiku (Claude)", "claude:haiku"]`.
- Append each OpenCode model as `["<model_id> (OpenCode)", "opencode:<model_id>"]`.
- Default selection: `claude:opus`.
The model value uses `tool:model` format so Claude can parse which tool to use from the selection.
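Putting Step 1.5 together, the list construction could be sketched as follows (`opencode_output` stands in for the raw output of `opencode models`; labels and ordering as described above):

```python
def build_model_list(opencode_output: str = ""):
    """Combine hardcoded Claude models with parsed OpenCode models.

    Each entry is [label, "tool:model"]; sketch of the Step 1.5 logic.
    """
    options = [
        ["Opus (Claude)", "claude:opus"],
        ["Sonnet (Claude)", "claude:sonnet"],
        ["Haiku (Claude)", "claude:haiku"],
    ]
    for line in opencode_output.splitlines():
        model_id = line.strip()
        if model_id:  # skip blank lines in the command output
            options.append([f"{model_id} (OpenCode)", f"opencode:{model_id}"])
    return options
```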
### Step 2: Choose model and handoff

Present options using AskUserQuestion:

- Model: Show a numbered list of all available models (from Step 1.5) in `tool:model` format. If a model was already parsed from natural language in Step 1, pre-select it.
- Handoff: Ask whether to auto-enter handoff mode (default: yes).
Parse the tool and model from the selected value by splitting on `:` (first part = tool, rest = model).
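Splitting on only the first `:` matters because OpenCode model IDs can contain separators of their own; a minimal sketch:

```python
def split_tool_model(value: str):
    """Split 'tool:model' on the first colon only."""
    tool, _, model = value.partition(":")
    return tool, model
```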
### Step 3: Ensure handoff skill is allowed

If the user chose to enable handoff, ensure `Skill(handoff)` is in the worktree's `.claude/settings.local.json` permissions so Claude won't prompt for it:

```bash
python3 ~/.claude/skills/wksp/wksp_ops.py ensure-skill-permission --path '<RESOLVED_PATH>' --skill handoff
```
This reads `<RESOLVED_PATH>/.claude/settings.local.json` (creating it if needed), checks whether `Skill(handoff)` is in `permissions.allow`, and adds it if missing. For the handoff skill, it also adds a `Bash(python3 "<scripts_dir>/"*)` wildcard pattern so all handoff scripts run without manual approval. Must run with `dangerouslyDisableSandbox: true`, since the path may be outside the project.
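In outline, the permission update could look like this sketch (field names match the description above; the real `wksp_ops.py` op additionally handles the Bash wildcard pattern):

```python
import json
from pathlib import Path

def ensure_skill_permission(worktree: str, skill: str = "handoff"):
    """Add Skill(<skill>) to permissions.allow in settings.local.json.

    Sketch of the Step 3 op; creates the file if it does not exist.
    """
    settings_path = Path(worktree) / ".claude" / "settings.local.json"
    settings_path.parent.mkdir(parents=True, exist_ok=True)
    settings = json.loads(settings_path.read_text()) if settings_path.exists() else {}
    allow = settings.setdefault("permissions", {}).setdefault("allow", [])
    entry = f"Skill({skill})"
    if entry not in allow:  # idempotent: never add duplicates
        allow.append(entry)
    settings_path.write_text(json.dumps(settings, indent=2) + "\n")
```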
### Step 4: Pre-trust workspace

```bash
python3 ~/.claude/skills/wksp/wksp_ops.py pretrust --path '<RESOLVED_PATH>'
```

This writes `~/.claude.json` and must run with `dangerouslyDisableSandbox: true`.
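What the pretrust op does could be sketched as follows. The `projects` / `hasTrustDialogAccepted` structure is an assumption about the shape of Claude Code's `~/.claude.json`; the real op may set different fields:

```python
import json
from pathlib import Path

def pretrust(project_path: str, config_path: str = "~/.claude.json"):
    """Mark a project as trusted so Claude Code skips the trust dialog.

    ASSUMPTION: the projects/hasTrustDialogAccepted keys reflect
    ~/.claude.json's layout; sketch only.
    """
    cfg_file = Path(config_path).expanduser()
    cfg = json.loads(cfg_file.read_text()) if cfg_file.exists() else {}
    project = cfg.setdefault("projects", {}).setdefault(project_path, {})
    project["hasTrustDialogAccepted"] = True
    cfg_file.write_text(json.dumps(cfg, indent=2) + "\n")
```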
### Step 5: Open terminal tab and launch

Use `launch.py` to build the shell command and open a new terminal tab. The script auto-detects iTerm2 vs Terminal.app via the bundled AppleScript.

```bash
python3 ~/.claude/skills/wksp/launch.py --path '<RESOLVED_PATH>' --tool '<TOOL>' [--model '<MODEL>'] [--handoff]
```
Both tools accept `--model` for model selection. For the handoff prompt:

- `claude`: the positional arg is the prompt: `claude --model <M> "enter handoff no ask"`
- `opencode`: the positional arg is the project path; use `--prompt`: `opencode --model <M> --prompt "enter handoff no ask"`
The script prints the generated shell command on stdout. Use this for the confirmation message.
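The per-tool command construction could be sketched like this (flag placement as described above; `launch.py`'s actual output may differ):

```python
import shlex

def build_tool_command(tool: str, path: str, model: str = "", handoff: bool = False):
    """Build the launch command for claude or opencode (sketch).

    claude takes the prompt positionally; opencode takes the project path
    positionally and the prompt via --prompt.
    """
    prompt = "enter handoff no ask"
    parts = [tool]
    if model:
        parts += ["--model", model]
    if tool == "claude":
        if handoff:
            parts.append(prompt)
    else:
        if handoff:
            parts += ["--prompt", prompt]
        parts.append(path)
    return " ".join(shlex.quote(p) for p in parts)
```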
### Step 6: Confirm
Print confirmation: which path was opened, which tool, which model, and whether handoff was enabled.