# Disk Space Cleanup
Reclaim disk space with a safety-first workflow: investigate first, run obvious low-risk cleanup wins, then do targeted analysis for larger opportunities.
## Execution Default
- Start with non-destructive investigation and quick sizing.
- Prioritize easy wins first (`nix-collect-garbage`, container prune, Cargo artifacts).
- Propose destructive actions with expected impact before running them.
- Run destructive actions only after confirmation, unless the user explicitly requests immediate execution of obvious wins.
- Capture new reusable findings by updating this skill before finishing.
## Workflow
- Establish current pressure and biggest filesystems
- Run easy cleanup wins
- Sweep Rust build artifacts in common project roots
- Investigate remaining heavy directories with `ncdu`/`du`
- Investigate `/nix/store` roots when large toolchains still persist
- Summarize reclaimed space and next candidate actions
- Record new machine-specific ignore paths or cleanup patterns in this skill
## Step 1: Baseline
Run a quick baseline before deleting anything:
```
df -h /
df -h /home
df -h /nix
```
Optionally add a quick home-level size snapshot:
```
du -xh --max-depth=1 "$HOME" 2>/dev/null | sort -h
```
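To report reclaimed space at the end, the baseline can be captured to a file before anything is deleted. A minimal sketch; the `BASELINE_FILE` temp-file name is illustrative, not something this skill mandates:

```shell
# Record available space (KiB) per mountpoint before any cleanup runs.
# BASELINE_FILE is an illustrative name, not part of the skill itself.
BASELINE_FILE=$(mktemp)
df -P / "$HOME" 2>/dev/null | tail -n +2 | awk '{print $6, $4}' | sort -u > "$BASELINE_FILE"
cat "$BASELINE_FILE"
```

Diffing the same snapshot after cleanup gives the per-filesystem reclaim numbers for the summary.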
## Step 2: Easy Wins
Use these first when the user wants fast, low-effort reclaiming:
```
sudo -n nix-collect-garbage -d
sudo -n docker system prune -a
sudo -n podman system prune -a
```
Notes:
- Add `--volumes` only when the user approves deleting unused volumes.
- Re-check free space after each command to show impact.
- Prefer `sudo -n` first so cleanup runs fail fast instead of hanging on password prompts.
- If root is still tight after these, run app cache cleaners before proposing raw `rm -rf`:
```
uv cache clean
pip cache purge
yarn cache clean
npm cache clean --force
```
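To make the "re-check free space after each command" note concrete, a small helper can print the delta. The `avail_kb` function and the `before`/`after` names are hypothetical, not part of any of the tools above:

```shell
# Hypothetical helper: available KiB on the filesystem holding a path.
avail_kb() { df -P "$1" | awk 'NR==2 {print $4}'; }

before=$(avail_kb /)
# ...run exactly one cleanup command here, e.g. `uv cache clean`...
after=$(avail_kb /)
echo "freed on /: $(( after - before )) KiB"
```

Running one command per measurement keeps the impact report attributable to a specific action.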
## Step 3: Rust Build Artifact Cleanup
Target common roots first: `~/Projects` and `~/code`.

Use `cargo-sweep` in dry-run mode before deleting:

```
nix run nixpkgs#cargo-sweep -- sweep -d -r -t 30 ~/Projects ~/code
```
Then perform deletion:

```
nix run nixpkgs#cargo-sweep -- sweep -r -t 30 ~/Projects ~/code
```
Alternative for toolchain churn cleanup:

```
nix run nixpkgs#cargo-sweep -- sweep -r -i ~/Projects ~/code
```
Recommended sequence:

- Run `-t 30` first for age-based stale builds.
- Run a dry-run with `-i` next.
- Apply `-i` when the dry-run shows significant reclaimable space.
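Before either sweep, a read-only pass shows how much the `target/` directories actually hold. The 3-level depth cap assumes a `root/project/target` layout, which may not match every machine:

```shell
# Read-only sizing of Cargo target/ dirs under the usual roots.
# Depth 3 is an assumed layout (root/project/target); adjust as needed.
sizes=$(for root in "$HOME/Projects" "$HOME/code"; do
  [ -d "$root" ] || continue
  find "$root" -maxdepth 3 -type d -name target -prune -exec du -sh {} + 2>/dev/null
done | sort -h)
printf '%s\n' "$sizes"
```

The sorted sizes make it easy to quote expected impact when proposing the sweep.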
## Step 4: Investigation with `ncdu` and `du`
Avoid mounted or remote filesystems when profiling space. Load ignore patterns from `references/ignore-paths.md`.

Use one-filesystem scans to avoid crossing mounts:

```
ncdu -x "$HOME"
sudo ncdu -x /
```
When excluding known noisy mountpoints:

```
ncdu -x --exclude "$HOME/keybase" "$HOME"
sudo ncdu -x --exclude /keybase --exclude /var/lib/railbird /
```
If `ncdu` is missing, use:

```
nix run nixpkgs#ncdu -- -x "$HOME"
```
For quick, non-blocking triage on very large trees, prefer bounded probes:
```
timeout 30s du -xh --max-depth=1 "$HOME/.cache" 2>/dev/null | sort -h
timeout 30s du -xh --max-depth=1 "$HOME/.local/share" 2>/dev/null | sort -h
```
Machine-specific heavy hitters seen in practice:

- `~/.cache/uv` can exceed 20G and is reclaimable with `uv cache clean`.
- `~/.cache/spotify` can exceed 10G; treat as optional app-cache cleanup.
- `~/.local/share/Trash` can exceed several GB; empty only with user approval.
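The heavy hitters above can be checked in one bounded, read-only pass; the 30-second cap mirrors the triage probes earlier, and the directory list is just these known offenders:

```shell
# Bounded, read-only probe of the known heavy per-user caches.
probe=$(for dir in "$HOME/.cache/uv" "$HOME/.cache/spotify" "$HOME/.local/share/Trash"; do
  [ -d "$dir" ] || continue
  timeout 30s du -sh "$dir" 2>/dev/null
done | sort -h)
printf '%s\n' "$probe"
```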
## Step 5: `/nix/store` Deep Dive
When `/nix/store` is still large after GC, inspect root causes instead of deleting random paths.

Useful commands:

```
nix path-info -Sh /nix/store/* 2>/dev/null | sort -h | tail -n 50
nix-store --gc --print-roots
```
Avoid `du -sh /nix/store` as a first diagnostic; it can be very slow on large stores.
For repeated GHC/Rust toolchain copies:

```
nix path-info -Sh /nix/store/* 2>/dev/null | rg '(ghc|rustc|rust-std|cargo)'
nix-store --gc --print-roots | rg '(ghc|rust)'
```
Resolve why a path is retained:

```
/home/imalison/dotfiles/dotfiles/lib/functions/find_store_path_gc_roots /nix/store/<store-path>
nix why-depends <consumer-store-path> <dependency-store-path>
```
Common retention pattern on this machine:

- Many `.direnv/flake-profile-*` symlinks under `~/Projects` and worktrees keep `nix-shell-env`/`ghc-shell-*` roots alive.
- `find_store_path_gc_roots` is especially useful for proving GHC retention: many large `ghc-9.10.3-with-packages` paths are unique per project, while the base `ghc-9.10.3` and docs paths are shared.
- Quantify before acting:

```
find ~/Projects -type l -path '*/.direnv/flake-profile-*' | wc -l
find ~/Projects -type d -name .direnv | wc -l
nix-store --gc --print-roots \
  | rg '/\.direnv/flake-profile-' \
  | awk -F' -> ' '{print $1 "|" $2}' \
  | while IFS='|' read -r root target; do
      nix-store -qR "$target" | rg '^/nix/store/.+-ghc-[0-9]'
    done | sort | uniq -c | sort -nr | head
```
- If counts are high and the projects are inactive, propose targeted `.direnv` cleanup for user confirmation.
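One way to turn those counts into a concrete proposal is a read-only listing of `.direnv` directories that have not changed recently. The 90-day threshold is an assumption to tune per machine, and directory mtime is only a rough inactivity signal:

```shell
# Read-only: list .direnv dirs untouched for 90+ days as cleanup candidates.
# The 90-day threshold is an assumption; tune it per machine.
candidates=$(find "$HOME/Projects" -type d -name .direnv -mtime +90 -prune -print 2>/dev/null | sort)
printf '%s\n' "$candidates"
```

Presenting this list (with sizes) is the "proposed actions" step; nothing is deleted until the user confirms.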
## Safety Rules
- Do not delete user files directly unless explicitly requested.
- Prefer cleanup tools that understand ownership/metadata (`nix`, `docker`, `podman`, `cargo-sweep`) over `rm -rf`.
- Present a concise “proposed actions” list before high-impact deletes.
- If uncertain whether data is needed, stop at investigation and ask.
## Learning Loop (Required)
Treat this skill as a living playbook.
After each disk cleanup task:
- Add newly discovered mountpoints or directories to ignore in `references/ignore-paths.md`.
- Add validated command patterns or caveats discovered during the run to this `SKILL.md`.
- Keep instructions practical and machine-specific; remove stale guidance.