optimize-prompt-token-efficiency


Optimize Prompt Token Efficiency

Iteratively optimize prompt token efficiency by maximizing information density through verification loops. Primary goal: reduce the token consumption of AI-consumed prompts (CLAUDE.md, skills, agent prompts, specs) while preserving all semantic content.

Overview

This skill transforms verbose prompts into token-efficient versions through:

  1. Verification First - the prompt-token-efficiency-verifier checks for inefficiencies before any changes are made
  2. Optimization - Apply targeted compression based on verifier feedback
  3. Re-verification - Verify compression is lossless, iterate if issues remain (max 5 iterations)
  4. Output - Atomic replacement only after verification passes

Loop: Read → Verify → (Exit if efficient) → Optimize based on feedback → Re-verify → (Iterate if issues) → Output

Key principle: Don't try to optimize in one pass. The verifier drives all changes - if it finds no inefficiencies, the prompt is already token-efficient.
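The loop above can be sketched in Python. Note that `run_verifier` and `apply_compression` are hypothetical stand-ins for the prompt-token-efficiency-verifier and the optimization step (the skill does not specify their interfaces); the atomic-replacement step at the end mirrors the "output only after verification passes" rule.

```python
# Sketch of the Read -> Verify -> Optimize -> Re-verify -> Output loop,
# under assumed interfaces for the verifier and optimizer.
import os
import tempfile

MAX_ITERATIONS = 5  # iteration cap from the skill description


def run_verifier(text: str) -> list[str]:
    """Hypothetical: return a list of inefficiency findings (empty = efficient)."""
    return []  # placeholder


def apply_compression(text: str, findings: list[str]) -> str:
    """Hypothetical: rewrite `text` to address each verifier finding."""
    return text  # placeholder


def optimize_prompt(path: str) -> None:
    # Read
    with open(path, encoding="utf-8") as f:
        text = f.read()

    # Verify / optimize / re-verify, up to MAX_ITERATIONS passes
    for _ in range(MAX_ITERATIONS):
        findings = run_verifier(text)
        if not findings:  # exit early: already token-efficient
            break
        text = apply_compression(text, findings)

    # Output: atomic replacement -- write to a temp file in the same
    # directory, then rename over the original in one step.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w", encoding="utf-8") as f:
        f.write(text)
    os.replace(tmp, path)
```

Writing to a temp file and using `os.replace` means a crash mid-write never leaves the prompt file half-overwritten, which is what "atomic replacement only after verification passes" requires.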

Workflow

Phase 0: Create Task List (use task management immediately)
