Claude Code Usage Report
Analyze Claude Code token consumption and estimate costs from session JSONL files
stored locally in ~/.claude/projects/.
When to Use
- User wants to know how many tokens they have consumed
- User asks for a cost estimate of their Claude Code usage
- Comparing API pricing vs subscription plan value
- Analyzing usage patterns across projects or time periods
Data Source
Session data is stored as JSONL files in `~/.claude/projects/<project-dir>/`:
- Main session files: `<project-dir>/*.jsonl`
- Subagent session files: `<project-dir>/*/subagents/*.jsonl`
Each line is a JSON object. Token usage is at message.usage:
| Field | Description |
|---|---|
| `input_tokens` | Direct input tokens |
| `output_tokens` | Model output tokens |
| `cache_read_input_tokens` | Tokens read from prompt cache |
| `cache_creation_input_tokens` | Tokens written to prompt cache (5m or 1h TTL) |
Model identification is at message.model (contains opus, sonnet, or haiku).
Timestamps are at timestamp (ISO 8601 format).
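Reading these fields needs only the Python stdlib. A minimal sketch of per-file aggregation (the function name `tally_usage` and the skip-on-error handling are illustrative, not taken from the bundled script):

```python
import json
from collections import Counter

USAGE_FIELDS = (
    "input_tokens",
    "output_tokens",
    "cache_read_input_tokens",
    "cache_creation_input_tokens",
)

def tally_usage(jsonl_path):
    """Sum the four usage token counts across one session file."""
    totals = Counter()
    with open(jsonl_path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if not line:
                continue
            try:
                record = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip malformed lines rather than abort the scan
            usage = record.get("message", {}).get("usage")
            if not usage:
                continue  # not every line carries token usage
            for field in USAGE_FIELDS:
                totals[field] += usage.get(field, 0)
    return totals
```

The same loop can also pull `message.model` and `timestamp` from each record to group totals by model and date.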
Pricing
All pricing data (model token costs and subscription plans) is stored in
scripts/pricing.json — the single source of truth
used by the script. The file includes an updated date field so the user
can see when prices were last verified.
Prompt caching has two write tiers: 5-minute (1.25x base input) and 1-hour (2x base input). Cache reads cost 0.1x base input. The script uses the 5-minute cache write price for cost estimation because Claude Code uses ephemeral caching.
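Those multipliers reduce cost estimation to simple arithmetic. A sketch of the per-model calculation, assuming prices are quoted per million tokens (the function name and signature are hypothetical, not the bundled script's API):

```python
def estimate_cost(usage, base_input_price, output_price):
    """Estimate USD cost for one model's token counts.

    Multipliers follow the caching tiers described above: cache
    reads cost 0.1x base input; 5-minute cache writes cost 1.25x
    base input (the tier used for estimation, since Claude Code
    uses ephemeral caching). Prices are USD per million tokens.
    """
    per_token = 1 / 1_000_000
    return (
        usage.get("input_tokens", 0) * base_input_price
        + usage.get("output_tokens", 0) * output_price
        + usage.get("cache_read_input_tokens", 0) * base_input_price * 0.1
        + usage.get("cache_creation_input_tokens", 0) * base_input_price * 1.25
    ) * per_token
```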
If the user asks to update pricing:
- Run `python3 <skill-path>/scripts/usage_report.py --update-pricing` to display the current `pricing.json` values and update the checked date
- Fetch https://docs.anthropic.com/en/docs/about-claude/pricing using your web tools to get the latest prices
- Compare fetched prices with `pricing.json` values
- Edit `pricing.json` with any changed values
The report displays the pricing data date so the user knows how current the prices are.
Report Process
Step 1: Determine Scope
Parse the user's request to determine:
- Date range: "since Feb 18", "last month", "all time", "this week"
- Project filter: specific project, current project, or all projects
If the user does not specify a scope, ask them to clarify. Default to "all projects" if only a date range is given.
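The phrase-to-date mapping can be sketched as follows (the phrases and the helper are purely illustrative; in practice the assistant interprets the request directly and passes concrete dates to the script):

```python
from datetime import date, timedelta

def resolve_range(phrase, today=None):
    """Map a few common scope phrases to a (start, end) date pair.

    Returns (None, None) for "all time", meaning no date filter.
    """
    today = today or date.today()
    if phrase == "all time":
        return None, None
    if phrase == "this week":
        # week starts on Monday
        return today - timedelta(days=today.weekday()), today
    if phrase == "last month":
        first_of_this_month = today.replace(day=1)
        last_of_prev_month = first_of_this_month - timedelta(days=1)
        return last_of_prev_month.replace(day=1), last_of_prev_month
    raise ValueError(f"unrecognized phrase: {phrase}")
```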
Step 2: Collect Data
Run the bundled script at scripts/usage_report.py to scan and aggregate session data. Resolve the script path relative to the skill installation directory.
python3 <skill-path>/scripts/usage_report.py \
[--start-date YYYY-MM-DD] \
[--end-date YYYY-MM-DD] \
[--project PROJECT_NAME] \
[--output PATH] \
[--update-pricing]
| Argument | Description | Default |
|---|---|---|
| `--start-date` | Include only data on or after this date | No filter |
| `--end-date` | Include only data on or before this date | No filter |
| `--project` | Project directory name or substring | All projects |
| `--output` | Report output file path | `~/claude-code-usage-report.txt` |
| `--update-pricing` | Fetch pricing page and guide update of `pricing.json` | — |
The script outputs the formatted report to stdout and saves it to the output file. It uses only Python stdlib (no dependencies to install).
The report always includes a Plan ROI Comparison section showing savings for all plans (Pro, Max 5x, Max 20x) against API-equivalent cost.
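The ROI comparison itself is just a subtraction of a flat monthly plan price from the API-equivalent cost. A sketch with placeholder plan prices (the real prices live in `scripts/pricing.json`, not in this snippet):

```python
def plan_roi(api_equivalent_cost, plans=None):
    """Compare API-equivalent cost against flat monthly plan prices.

    Returns (plan name, monthly price, savings, savings %) rows.
    The default plan prices are illustrative placeholders.
    """
    plans = plans or {"Pro": 20.0, "Max 5x": 100.0, "Max 20x": 200.0}
    rows = []
    for name, monthly in plans.items():
        savings = api_equivalent_cost - monthly
        pct = (savings / api_equivalent_cost * 100) if api_equivalent_cost else 0.0
        rows.append((name, monthly, savings, pct))
    return rows
```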
Step 3: Present the Report
The script produces a formatted report with these sections:
- Global Summary — active projects, session counts, total tokens, API equivalent cost
- Plan ROI Comparison — savings for Pro, Max 5x, and Max 20x plans
- Model Breakdown — per-model token counts and costs with a TOTAL row
- Cost Breakdown by Type — input/output/cache cost split per model
- Daily Usage — tokens and cost per active day
- Per-Project Breakdown — sorted by cost descending with per-model detail
Present the script output to the user. The report is also saved
automatically to the --output path (default ~/claude-code-usage-report.txt).
Inform the user of the saved file path.
Step 4: Highlight Key Insights
After the full report, add a brief summary highlighting:
- Top 5 projects by cost
- Peak usage day
- ROI vs API pricing (if plan context is known)
- Any notable patterns (e.g., one project dominating usage)
Display Guidelines
- Format large numbers with commas (e.g., `1,234,567`)
- Right-align numeric columns
- Use `$X.XX` format for costs
- Clean project names for readability (strip path prefixes)
- Sort projects by cost descending
- Show percentages for savings and cost distribution
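The number- and name-formatting rules above map to one-liners in Python (helper names are illustrative):

```python
def fmt_tokens(n):
    """Thousands separators for token counts, e.g. 1234567 -> 1,234,567."""
    return f"{n:,}"

def fmt_cost(x):
    """Costs in $X.XX form."""
    return f"${x:.2f}"

def clean_project_name(raw):
    """Strip path-style prefixes, keeping only the last path component."""
    return raw.rstrip("/").split("/")[-1]
```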
Edge Cases
- Missing timestamps: Skip messages without timestamps for filtered reports
- Unknown models: Group under "other" using Sonnet pricing as fallback
- Empty sessions: Skip projects with zero tokens in the filtered period
- Old session cleanup: Claude Code may prune old session files — mention this if the user asks about older periods and data seems incomplete
- Multiple plans: If the user switched plans during the period, ask for the switch date and compute each period separately
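For the multiple-plans case, splitting the report period at the switch date can be sketched as follows (the helper name is hypothetical); each sub-period is then re-run with its own `--start-date`/`--end-date` and compared against its own plan:

```python
from datetime import date, timedelta

def split_at_switch(start, end, switch_date):
    """Split an inclusive [start, end] period into two sub-periods
    at the plan switch date; the switch day belongs to the new plan."""
    if not (start <= switch_date <= end):
        raise ValueError("switch date outside the report period")
    return (start, switch_date - timedelta(days=1)), (switch_date, end)
```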
Anti-patterns to Avoid
| Anti-pattern | Better alternative |
|---|---|
| Generating a script from scratch | Use the bundled scripts/usage_report.py |
| Making per-file tool calls | The bundled script scans everything |
| Hardcoding prices in the skill | Prices live in scripts/pricing.json |
| Ignoring subagent files | Script includes */subagents/*.jsonl |
| Showing raw directory names | Script cleans up to readable project names |
| Omitting cache tokens | Script shows all four token types |
| Guessing the date range | Ask the user if ambiguous |