# Rust Profiling with Samply
Profile Rust binaries to find CPU bottlenecks using samply.
## Quick Start

```bash
# 1. Ensure a profiling profile exists in Cargo.toml (see reference.md)
# 2. Build with debug symbols
cargo build --profile profiling

# 3. Profile (opens the Firefox Profiler UI)
samply record ./target/profiling/<binary> [args...]

# 4. Or save the profile for CLI analysis
samply record --save-only -o profile.json ./target/profiling/<binary>
python3 ~/.agents/skills/rust-profiling/scripts/analyze_profile.py profile.json
```
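The `profiling` profile used above is not built into Cargo; reference.md has the authoritative setup. As a minimal sketch, the usual shape is a profile that inherits release optimizations but keeps debug symbols so samply can resolve stack frames:

```toml
# Cargo.toml — sketch of a typical profiling profile (see reference.md)
[profile.profiling]
inherits = "release"  # same optimizations as release builds
debug = true          # keep debug info for readable stack traces
```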
## Skill Files

| File | Purpose |
|---|---|
| reference.md | Cargo.toml setup, samply options, troubleshooting |
| examples.md | Common profiling scenarios and analysis patterns |
| scripts/analyze_profile.py | CLI tool to analyze saved profile.json files |
## When to Use
- Performance is slower than expected
- Before optimizing (measure first!)
- After optimization (verify improvement)
- Investigating CPU-bound operations
## What to Look For

| Pattern | Meaning | Action |
|---|---|---|
| High self-time | Function itself is slow | Direct optimization target |
| High total-time | Called often or slow callees | Check call frequency |
| malloc/alloc in hot path | Allocation overhead | Pool, arena, or stack allocate |
| pthread_mutex/parking_lot | Lock contention | Reduce lock scope or use lock-free structures |
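When the table points at allocation overhead, the usual first move is to hoist the allocation out of the hot loop and reuse a buffer. A minimal sketch, with illustrative function names and workload that are not part of this skill:

```rust
// Hot path that allocates a fresh Vec on every iteration; in a profile this
// shows up as malloc/alloc self-time under this function.
fn checksum_per_record(records: &[Vec<u8>]) -> u64 {
    let mut total = 0u64;
    for rec in records {
        let doubled: Vec<u8> = rec.iter().map(|b| b.wrapping_mul(2)).collect(); // alloc per record
        total += doubled.iter().map(|&b| b as u64).sum::<u64>();
    }
    total
}

// Same work with one reusable buffer: the allocation moves out of the loop.
fn checksum_per_record_reused(records: &[Vec<u8>]) -> u64 {
    let mut buf: Vec<u8> = Vec::new();
    let mut total = 0u64;
    for rec in records {
        buf.clear(); // keeps capacity, so no reallocation after warm-up
        buf.extend(rec.iter().map(|b| b.wrapping_mul(2)));
        total += buf.iter().map(|&b| b as u64).sum::<u64>();
    }
    total
}

fn main() {
    let records = vec![vec![1u8, 2, 3]; 1_000];
    assert_eq!(checksum_per_record(&records), checksum_per_record_reused(&records));
}
```

After a change like this, profile again with samply to confirm the allocator frames have dropped out of the hot path.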