
hyperfine-benchmarking

Benchmark CLI commands with a reproducible methodology instead of relying on one-off `time` output.

When to use this skill

  • Compare two or more command variants (tool-a vs tool-b)
  • Validate performance impact before/after script changes
  • Attach benchmark evidence to PRs

Instructions

  1. Confirm tool availability.
  2. Keep input/workdir/environment stable across compared commands.
  3. Run warmups and enough runs for stable variance.
  4. Export JSON/markdown outputs for review.
  5. Summarize relative speedup + risk notes.

Examples

Availability check

hyperfine --version
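The availability check can be made fail-safe in scripts. A minimal sketch, assuming a POSIX shell:

```shell
# Check that hyperfine is on PATH before running any benchmarks.
if command -v hyperfine >/dev/null 2>&1; then
  hyperfine --version
else
  echo "hyperfine not found; install it before benchmarking" >&2
fi
```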

Two-command comparison

hyperfine \
  --warmup 3 \
  --min-runs 10 \
  'cmd_a --with flags' \
  'cmd_b --with flags'

Parameter sweep

hyperfine \
  --warmup 3 \
  --parameter-list mode fast,balanced,thorough \
  'mytool --mode {mode} input.txt'
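For numeric parameters, hyperfine also supports a scan over a range rather than an explicit list. A sketch using `sleep {delay}` as a stand-in workload:

```shell
# Sweep a numeric parameter from 0.1 to 0.5 in steps of 0.1.
# 'sleep {delay}' is a placeholder for a real command.
hyperfine \
  --warmup 3 \
  --parameter-scan delay 0.1 0.5 -D 0.1 \
  'sleep {delay}'
```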

Export artifacts

hyperfine \
  --warmup 3 \
  --min-runs 10 \
  --export-json benchmark.json \
  --export-markdown benchmark.md \
  'cmd_a' 'cmd_b'
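The JSON export can then be post-processed to compute the relative speedup for a PR summary. A sketch, assuming python3 is available; the benchmark.json contents below are illustrative stand-ins for a real `--export-json` file:

```shell
# Illustrative data in the shape hyperfine's --export-json produces
# (a top-level "results" array with per-command "mean" and "stddev" in seconds).
cat > benchmark.json <<'EOF'
{"results": [
  {"command": "cmd_a", "mean": 1.84, "stddev": 0.05},
  {"command": "cmd_b", "mean": 1.23, "stddev": 0.04}
]}
EOF

# Report each command's mean time and its ratio to the fastest command.
python3 - <<'EOF'
import json

with open("benchmark.json") as f:
    results = json.load(f)["results"]

fastest = min(results, key=lambda r: r["mean"])
for r in results:
    ratio = r["mean"] / fastest["mean"]
    print(f'{r["command"]}: {r["mean"]:.3f}s ± {r["stddev"]:.3f}s ({ratio:.2f}x)')
EOF
```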

Best practices

  • Prefer relative speedup and confidence ranges over single-run claims.
  • Do not compare commands with different semantics unless outputs are normalized.
  • If variance is high, increase runs or reduce background noise before concluding.
  • Record dataset/path and exact command strings in PR text.
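When variance stays high, shell startup overhead and stale caches are common culprits. A sketch of noise-reduction flags, assuming a recent hyperfine (>= 1.15 for `--shell none`); `cmd_a`/`cmd_b` are the placeholder commands from the examples above:

```shell
# Run commands directly (no intermediate shell) and sync dirty
# pages before each timed run to reduce I/O cache noise.
hyperfine \
  --warmup 3 \
  --shell none \
  --prepare 'sync' \
  'cmd_a' 'cmd_b'
```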
