review-crate
Non-negotiable rules:
- Enumerate and read every file in the target crate.
- Run the crate's tests unless there is a concrete blocker.
- Ground every finding in exact file:line evidence.
- Check the current repository's canonical issue file before writing new findings.
- Write findings only to .ulpi/issues/<crate-name>.md in this repository.
Inputs
$crate_path: Path to the crate directory
Goal
Perform a complete crate audit that:
- reads the entire crate
- runs tests
- reports real bugs and coverage gaps
- updates the canonical local issue file without duplicating prior findings
Step 0: Resolve crate identity
Determine:
- crate path
- crate name from Cargo.toml
- canonical issue file path: .ulpi/issues/<crate-name>.md
If the crate path is invalid, stop and say so clearly.
Success criteria: The crate and issue-file target are explicit before review begins.
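One way to sketch Step 0 as a script. The crate path, crate name, and fixture below are invented for illustration, and the name extraction assumes a simple [package] section with `name = "..."` on its own line (a real implementation would parse the TOML properly):

```shell
# Hypothetical crate path; the fixture Cargo.toml exists only for this sketch.
crate_path="./my-crate"
mkdir -p "$crate_path"
printf '[package]\nname = "my-crate"\n' > "$crate_path/Cargo.toml"

# Invalid path: stop and say so clearly.
if [ ! -f "$crate_path/Cargo.toml" ]; then
  echo "invalid crate path: $crate_path (no Cargo.toml)" >&2
  exit 1
fi

# Naive name extraction, assuming `name = "..."` appears on one line.
crate_name=$(sed -n 's/^name *= *"\(.*\)"/\1/p' "$crate_path/Cargo.toml" | head -n 1)
issue_file=".ulpi/issues/$crate_name.md"
echo "crate: $crate_name -> issue file: $issue_file"
```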
Step 1: Enumerate and read the whole crate
Read every file in the crate, including:
- Cargo.toml
- source files
- test files
- crate-local docs such as CLAUDE.md
- supporting docs in the crate directory
Do not sample. Do not stop after "main files".
Success criteria: You can honestly state that the whole crate was read.
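Enumeration can be made mechanical so nothing is sampled or skipped. A minimal sketch (the crate path and fixture files are invented for illustration):

```shell
# Hypothetical crate directory with a couple of fixture files.
crate_path="./demo-crate"
mkdir -p "$crate_path/src"
touch "$crate_path/Cargo.toml" "$crate_path/src/lib.rs"

# List every regular file in the crate; read each one, not a sample.
file_list=$(find "$crate_path" -type f | sort)
file_count=$(printf '%s\n' "$file_list" | wc -l)
echo "files to read: $file_count"
printf '%s\n' "$file_list"
```

Keeping the full list lets the final report state a concrete file count rather than a vague "main files were reviewed".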
Step 2: Run crate tests
Run the narrowest full-crate verification that makes sense, typically:
cargo test -p <crate-name> -- --nocapture
If tests cannot run, report the exact blocker.
Success criteria: The review includes real crate-test signal or a concrete reason it is unavailable.
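A sketch of capturing the test signal either way, so the review always reports a result or an exact blocker. The crate name is a placeholder, and a missing `cargo` binary is treated as one example of a concrete blocker:

```shell
crate_name="my-crate"  # hypothetical

if command -v cargo >/dev/null 2>&1; then
  # Narrowest full-crate verification; surface test output with --nocapture.
  if cargo test -p "$crate_name" -- --nocapture; then
    test_signal="tests passed"
  else
    test_signal="tests failed or could not run (see cargo output)"
  fi
else
  test_signal="blocker: cargo not installed on PATH"
fi
echo "$test_signal"
```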
Step 3: Check the existing issue file
If .ulpi/issues/<crate-name>.md already exists:
- read it first
- avoid duplicating existing findings
- append only genuinely new issues
If it does not exist:
- create it when there are findings worth recording
Success criteria: The issue file remains canonical and non-duplicative.
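The read-then-append discipline can be sketched as follows. The crate name and the finding text are invented; a real dedup check would compare findings semantically, not just by exact string, so treat the `grep` here as a lower bound:

```shell
crate_name="my-crate"  # hypothetical
issue_file=".ulpi/issues/$crate_name.md"
mkdir -p "$(dirname "$issue_file")"

# Hypothetical finding; in practice this comes from the analysis step.
new_finding="- [high] correctness: src/lib.rs:42 - off-by-one in range check"

# Read the existing file first and skip exact duplicates.
if [ -f "$issue_file" ] && grep -qF "$new_finding" "$issue_file"; then
  echo "duplicate finding, not appending"
else
  printf '%s\n' "$new_finding" >> "$issue_file"
  echo "appended to $issue_file"
fi
```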
Step 4: Analyze and write findings
Look for:
- correctness bugs
- panic or crash risks
- validation gaps
- race or state issues
- data corruption risks
- performance traps that are clearly real
- coverage gaps around risky behavior
For each finding include:
- severity
- category
- exact file:line
- why the issue is real
- suggested fix direction
Label uncertain claims as INFERENCE and explain why they are uncertain.
Success criteria: Findings are evidence-driven and actionable.
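For illustration, a finding entry in the issue file might look like the following. The file, line, bug, and fix are all invented placeholders; only the shape (severity, category, file:line, justification, fix direction) is prescribed:

```markdown
## [high] correctness: src/parser.rs:118

- Category: correctness / panic risk
- Why real: the index into the token buffer is advanced before the bounds
  check, so an input ending mid-token indexes past the end and panics.
- Suggested fix direction: check bounds before indexing, or return Option
  from the accessor and propagate the miss.
```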
Step 5: Finish with residual risk
Report:
- files read
- test results
- findings written or appended
- remaining uncertainty or coverage gaps
If there are no material findings, say so explicitly.
Success criteria: The user understands both what was reviewed and what risk remains.
Guardrails
- Do not write findings to external absolute paths outside this repository.
- Do not claim full review if files were skipped.
- Do not invent issues from intuition alone.
- Do not add disable-model-invocation; this is a valid deep-audit workflow.
- Do not turn this into a multi-crate sweep.
Output Contract
Report:
- crate reviewed
- file count and statement that the whole crate was read
- test results
- findings by severity
- issue file written or appended
- residual risk or clean result