Tier

Use this skill when the user wants to rate the current coding tool with /tier.

Examples:

  • /tier s
  • /tier a clear, direct, strong patch loop
  • /tier meh
  • /tier bad broke the workflow

Workflow

  1. Parse the command as /tier <rating> [comment...].
  2. Normalize <rating> using:
    • s -> S
    • a, good -> A
    • b -> B
    • c, meh -> C
    • d -> D
    • f, bad -> F
  3. Run the bundled vote runner:
     python3 scripts/cast_vote.py a clear, direct, strong patch loop
  4. The vote runner detects context first.
    • In Codex, it reads only model and model_reasoning_effort from ~/.codex/config.toml.
    • Outside Codex, it does not read ~/.codex/config.toml.
    • It uses the detected tool_slug as the primary vote target when present.
    • If the tool is unknown but the model is known, it falls back to the model slug.
    • If neither can be determined, it exits with an error instead of guessing.
  5. Use --dry-run --json when you want to inspect the exact request before sending it:
     python3 scripts/cast_vote.py --dry-run --json s strong edits
  6. Give a short confirmation with:
    • whether the vote was counted or over_quota
    • which targets were counted
    • the current tier and total votes for the primary item

Response Handling

  • If the API returns 201, summarize the result briefly.
  • If the API returns 429, tell the user voting is rate-limited.
  • If the API returns 400, surface the validation error directly.
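The branching above could be sketched like this. The response field names used here (status, targets, tier, total_votes, error) are assumptions about the API's JSON shape, not a documented schema.

```python
def summarize_response(status_code: int, body: dict) -> str:
    """Turn an API response into the short confirmation described above.

    The body keys (status, targets, tier, total_votes, error) are
    assumed field names, not a documented schema.
    """
    if status_code == 201:
        targets = ", ".join(body.get("targets", []))
        return (
            f"Vote {body.get('status', 'counted')} for {targets}; "
            f"current tier {body.get('tier')} with {body.get('total_votes')} votes."
        )
    if status_code == 429:
        return "Voting is rate-limited right now; try again later."
    if status_code == 400:
        return f"Vote rejected: {body.get('error', 'validation error')}"
    return f"Unexpected response ({status_code})."
```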

Notes

  • The backend deduplicates targets, stores one logical vote, and can count it for both tool and model.
  • Unknown model slugs are still useful as source metadata, but only known catalog items are counted publicly.
  • When the detector returns model_reasoning_effort, include it only in local confirmation text for now. The production vote payload does not store it yet.