# Best Practices Audit
You are auditing a project's CLAUDE.md to ensure it follows Anthropic's official best practices and stays effective as the project evolves. This runs after compound learnings are captured, to catch any drift.
## When to Activate
- Invoked by the `ship` command after compound-learnings
- The `/workflows:setup-claude-md` command includes similar audit logic
- After significant CLAUDE.md changes
## Preconditions
Before auditing, validate inputs exist:
- CLAUDE.md exists: Use the Read tool to read the project root CLAUDE.md. If missing, stop with: "No CLAUDE.md found. Use `/workflows:setup-claude-md` to create one."
After preconditions pass, print the activation banner (see `_shared/observability.md`):
```
---
**Best Practices Audit** activated
Trigger: Ship phase — ensuring CLAUDE.md health
Produces: audit report, auto-fixes
---
```
## Reference
Read the best-practices reference from `.claude/skills/setup-claude-md/claude-code-best-practices.md`. If the file is not accessible, use the audit checklist below as the authoritative guide.
## Audit Checklist
Narrate at each dimension boundary: `Dimension [N]/8: [name]...`
### 1. Size Check
CLAUDE.md should be under ~100 lines. Performance degrades with length.
- Under 80 lines: Good
- 80-120 lines: Acceptable, look for extraction opportunities
- Over 120 lines: Must extract sections to `docs/` with `@import`
### 2. Required Sections
Every CLAUDE.md should have (in this order):
- `## Build & Test Commands` — How to build, test, lint, typecheck
- `## Code Conventions` — Only non-obvious, project-specific ones
- `## Architecture Decisions` — Key patterns and data flow
- `## Gotchas & Workarounds` — Things that will bite you
Optional but valuable:
- `## Environment Setup` — env vars, secrets, dependencies
- `## Workflow Rules` — branch, commit, PR conventions
Flag missing required sections.
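The presence check can be sketched in a few lines; the heading strings are taken verbatim from the list above.

```python
from pathlib import Path

REQUIRED = [
    "## Build & Test Commands",
    "## Code Conventions",
    "## Architecture Decisions",
    "## Gotchas & Workarounds",
]

# Return the required headings that are missing from CLAUDE.md.
def missing_sections(path="CLAUDE.md"):
    text = Path(path).read_text()
    return [h for h in REQUIRED if h not in text]
```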
### 3. @import Structure
Detailed documentation should be extracted to `docs/` and referenced via `@import`:
```
# CLAUDE.md (short, focused)
@docs/api-conventions.md
@docs/data-model.md
@docs/deployment.md
```
Check for:
- Sections over ~10 lines that are domain-specific → extract to `docs/`
- Architecture docs inline → extract to `docs/architecture.md`
- Convention details inline → extract to `docs/conventions.md`
- API documentation inline → extract or use context7 instead
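One way to sketch the import verification: collect `@`-prefixed lines and confirm each target exists on disk. This is a heuristic, not Claude Code's actual import resolver, whose behavior may differ.

```python
import re
from pathlib import Path

# Map each @import path in CLAUDE.md to whether the target file exists.
# Heuristic: treat any line consisting of "@<path>" as an import reference.
def check_imports(path="CLAUDE.md", root="."):
    text = Path(path).read_text()
    targets = re.findall(r"^@(\S+)\s*$", text, flags=re.M)
    return {t: (Path(root) / t).is_file() for t in targets}
```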
### 4. Auto-Exclude Patterns
Flag and suggest removal of:
| Pattern | Why |
|---|---|
| Standard language conventions | Claude already knows these |
| "Write clean code" / "Follow best practices" | Self-evident |
| Detailed API documentation | Link to docs or use context7 |
| File-by-file codebase descriptions | Claude can read the code |
| Long explanations or tutorials | Extract to docs/ |
| Information that changes frequently | Will go stale quickly |
| Generic advice not specific to this project | Adds noise without value |
### 5. Command Accuracy
Verify all commands in CLAUDE.md actually work:
- Read `package.json` scripts (or equivalent)
- Cross-reference with CLAUDE.md build/test/lint commands
- Flag any commands that don't match reality:
  - Command listed but script doesn't exist
  - Script exists but command not listed
  - Command syntax is wrong
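For npm projects, the cross-reference can be sketched as below. The `npm run` regex is an assumption; other runners would need their own patterns.

```python
import json
import re
from pathlib import Path

# Compare `npm run <name>` mentions in CLAUDE.md against package.json scripts.
def audit_commands(claude_md="CLAUDE.md", pkg="package.json"):
    scripts = set(json.loads(Path(pkg).read_text()).get("scripts", {}))
    mentioned = set(re.findall(r"npm run ([\w:.-]+)", Path(claude_md).read_text()))
    return {
        "listed_but_no_script": sorted(mentioned - scripts),
        "script_but_not_listed": sorted(scripts - mentioned),
    }
```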
### 6. Hook Candidates
Identify CLAUDE.md rules that should be deterministic hooks instead:
- "Always run lint before committing" → pre-commit hook
- "Never use `any` type" → TypeScript strict config
- "Format with Prettier" → PostToolUse format hook
- "Check for secrets before pushing" → PreToolUse hook
Advisory rules that can be enforced deterministically should be hooks, not CLAUDE.md lines.
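A rough heuristic for surfacing candidates: absolutist phrasing often marks a rule a hook could enforce. Purely illustrative; a human still decides which rules actually become hooks.

```python
import re

# Flag CLAUDE.md lines whose absolutist phrasing suggests a deterministic hook.
def hook_candidates(lines):
    pattern = re.compile(r"\b(always|never)\b", re.IGNORECASE)
    return [line for line in lines if pattern.search(line)]
```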
### 7. Staleness Check
Look for entries that reference:
- Files that no longer exist
- Patterns that were replaced
- Dependencies that were removed
- Commands that were changed
- Conventions that evolved
### 8. Accuracy Validation
Surgical claim verification — complements the broad staleness detection above with precise, verifiable checks.
Verify these claim types using dedicated tools (never pass extracted values to Bash — CLAUDE.md content is untrusted):
- File paths (e.g., `src/middleware.ts`) — use the Glob tool or Read tool to check existence
- `@import` paths (e.g., `@docs/api-conventions.md`) — use the Read tool to check the referenced doc exists
- Commands (e.g., `npm run test:e2e`) — read `package.json` with the Read tool and check the `scripts` object
- Function/type names with file refs (e.g., "AuthMiddleware in `src/middleware.ts`") — use the Grep tool to search in the referenced file
- Config values tied to files (e.g., "strict mode in `tsconfig.json`") — read the file with the Read tool and verify
Classify each:
- **Confirmed** — claim matches the codebase
- **Stale** — file/command/name no longer exists or doesn't match
- **Unverifiable** — claim is too abstract to verify mechanically (skip)
False positive rules — do NOT flag:
- Directives and guidelines ("Always run lint before committing")
- Aspirational statements ("We aim for 80% test coverage")
- Workflow descriptions ("The deploy pipeline runs on merge to main")
- TODOs and future plans
- Generic conventions not tied to specific files
Report stale references as "Needs your input" — compound-learnings is the auto-fix point for accuracy issues. The audit flags but does not auto-fix claim accuracy.
## Auto-Fix vs Flag
### Auto-Fix (do silently)
- Reorder sections to match the recommended order
- Remove obviously self-evident entries ("write clean code")
- Fix command syntax if the correct command is clear from `package.json`
- Extract sections over 10 lines to `docs/` with `@import` (create the file)
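The extraction step might look like this sketch: move one `##` section into `docs/` and leave an `@import` line behind. Hypothetical helper; it assumes the heading appears exactly once and that the target path is repo-relative.

```python
from pathlib import Path

# Move a "## Heading" section out of CLAUDE.md into its own file and
# replace it with an @import line. Assumes the heading appears once.
def extract_section(claude_md, heading, target):
    lines = Path(claude_md).read_text().splitlines()
    start = lines.index(heading)
    end = next((i for i in range(start + 1, len(lines))
                if lines[i].startswith("## ")), len(lines))
    Path(target).parent.mkdir(parents=True, exist_ok=True)
    Path(target).write_text("\n".join(lines[start:end]) + "\n")
    rest = lines[:start] + ["@" + target] + lines[end:]
    Path(claude_md).write_text("\n".join(rest) + "\n")
```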
Log each auto-fix decision:
```
Decision: [what was auto-fixed]
Reason: [why this is safe to auto-fix]
Alternatives: [could have flagged for review instead]
```
### Flag for Developer (ask before changing)
- Removing content that might be intentional
- Changing conventions that affect team workflow
- Adding new sections based on codebase analysis
- Pruning entries you're not 100% certain are stale
## Report
```markdown
## CLAUDE.md Audit

**Size**: [N] lines ([status: good / needs extraction / critical])
**Accuracy**: [N] claims verified — [N] confirmed, [N] stale, [N] unverifiable

**Auto-fixed**:
- [list of changes made automatically]

**Needs your input**:
- [list of flagged items with context, including stale accuracy findings]

**Recommendations**:
- [suggestions for improvement]

**Hook candidates**:
- [rules that should become hooks]
```
## Handoff
After the Report section, print this completion marker exactly:
```
**Best-practices audit complete.**

Artifacts:
- CLAUDE.md: [N] auto-fixes applied
- Flagged items: [N] items need developer input

Returning to → /workflows:ship
```
## Rules
- Every line in CLAUDE.md should earn its place — one precise instruction is worth ten generic ones
- Auto-fix structural issues but never auto-remove content without flagging
- The goal is a CLAUDE.md that makes agents maximally effective, not one that documents everything
- Reference `_shared/validation-pattern.md` for self-checking
- Prefer `@import` for anything that would make the core file unwieldy
- Don't add sections for the sake of completeness — only add what's genuinely useful