# Session Token Usage (`sessionlog:tokenusage`)

Extract and report input/output token counts from Claude Code session logs.
## Steps
### 1. Identify the session

```bash
# Claude Code stores session logs under ~/.claude/projects/, in a
# directory named after the working directory with '/' replaced by '-'.
project_dir="$HOME/.claude/projects/$(pwd | sed 's|/|-|g')"

# The most recently modified .jsonl file is the current session.
current_session=$(ls -t "$project_dir"/*.jsonl 2>/dev/null | head -1)
session_id=$(basename "$current_session" .jsonl)

echo "Session: $session_id"
echo "Source: $current_session"
```
If the user specifies a session ID, use that instead of the most recent one.
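The directory-name mapping above can be sketched in Python. The helper name and example path are hypothetical, and the sketch assumes only `/` is rewritten, exactly as the `sed` command does:

```python
import os

def project_dir_for(cwd: str, home: str = os.path.expanduser("~")) -> str:
    """Map a working directory to its Claude Code project-log directory,
    replicating the '/' -> '-' rewrite from the sed command above."""
    return os.path.join(home, ".claude", "projects", cwd.replace("/", "-"))

# The leading '/' of the absolute path becomes a leading '-' in the
# directory name, e.g. /home/alice/myproject -> -home-alice-myproject.
print(project_dir_for("/home/alice/myproject", home="/home/alice"))
```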
### 2. Extract token usage

```bash
python3 -c "
import json, sys

log_file = sys.argv[1]

# Deduplicate by message ID — session logs may contain multiple
# JSONL lines per API call (streaming). Take the last usage block
# per unique message ID for accurate counts.
usage_by_msg = {}
with open(log_file) as f:
    for line in f:
        line = line.strip()
        if not line:
            continue
        try:
            obj = json.loads(line)
        except json.JSONDecodeError:
            continue
        msg = obj.get('message', {})
        msg_id = msg.get('id')
        usage = msg.get('usage')
        if msg_id and usage:
            usage_by_msg[msg_id] = usage

totals = {
    'input_tokens': 0,
    'output_tokens': 0,
    'cache_creation_input_tokens': 0,
    'cache_read_input_tokens': 0,
}
for usage in usage_by_msg.values():
    for key in totals:
        totals[key] += usage.get(key, 0)

msg_count = len(usage_by_msg)
total_input = totals['input_tokens'] + totals['cache_creation_input_tokens'] + totals['cache_read_input_tokens']

print(f'API calls:                   {msg_count:>12,}')
print(f'Input tokens:                {totals[\"input_tokens\"]:>12,}')
print(f'Output tokens:               {totals[\"output_tokens\"]:>12,}')
print(f'Cache creation input tokens: {totals[\"cache_creation_input_tokens\"]:>12,}')
print(f'Cache read input tokens:     {totals[\"cache_read_input_tokens\"]:>12,}')
print(f'Total input (all types):     {total_input:>12,}')
" "$current_session"
```
### 3. Report results
Present results as a clean table:
- Session ID — the UUID
- API calls — number of unique API calls with usage data
- Input tokens — non-cached input tokens
- Output tokens — tokens generated by the model
- Cache creation tokens — input tokens written to prompt cache
- Cache read tokens — input tokens served from prompt cache
- Total input — sum of all input token types
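As a sketch, the table can be rendered directly from the `totals` dict computed in step 2. The numbers and session ID below are illustrative placeholders, not real measurements:

```python
# Hypothetical totals, shaped like the dict built in step 2.
totals = {
    "input_tokens": 1_204,
    "output_tokens": 8_931,
    "cache_creation_input_tokens": 52_310,
    "cache_read_input_tokens": 910_442,
}

# Total input = every input-type counter (everything except output).
total_input = sum(v for k, v in totals.items() if k != "output_tokens")

rows = [
    ("Session ID", "<session-uuid>"),
    ("Input tokens", f"{totals['input_tokens']:,}"),
    ("Output tokens", f"{totals['output_tokens']:,}"),
    ("Cache creation tokens", f"{totals['cache_creation_input_tokens']:,}"),
    ("Cache read tokens", f"{totals['cache_read_input_tokens']:,}"),
    ("Total input", f"{total_input:,}"),
]
for label, value in rows:
    print(f"| {label:<22} | {value:>14} |")
```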