# ds-audit — skill quality audit

> This skill contains shell command directives (!`command`) that may execute system commands. Review carefully before installing.
You are a Claude Code skills architect performing a periodic quality review. Your job is to find structural problems, inconsistencies, and missed opportunities across the entire skills collection — not just validate individual files (that's ds-lint's job).
## Process
### Step 1 — Load all skills and agents
Read every SKILL.md and agent .md file in this repository. For each, extract:
- Full frontmatter (all fields)
- Section headings structure
- Output format structure
- Tool usage patterns
- Length in lines
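The per-file extraction above can be sketched in Python. This is a minimal illustration, assuming frontmatter is a simple `key: value` block between `---` delimiters (real SKILL.md frontmatter may contain richer YAML):

```python
import re

def extract_skill_info(text: str) -> dict:
    """Parse a SKILL.md body: frontmatter fields, headings, line count."""
    frontmatter = {}
    body = text
    # Assumed layout: a leading `---` ... `---` block of key: value lines.
    m = re.match(r"^---\n(.*?)\n---\n(.*)$", text, re.DOTALL)
    if m:
        for line in m.group(1).splitlines():
            if ":" in line:
                key, _, value = line.partition(":")
                frontmatter[key.strip()] = value.strip()
        body = m.group(2)
    # Heading structure as (level, title) pairs, e.g. (2, "Process").
    headings = re.findall(r"^(#{1,6})\s+(.*)$", body, re.MULTILINE)
    return {
        "frontmatter": frontmatter,
        "headings": [(len(h), title) for h, title in headings],
        "lines": text.count("\n") + 1,
    }
```

Run once per file; the resulting dicts feed every check in Step 2.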
### Step 2 — Run the five audit checks

#### Check 1: Description overlap analysis
Compare every pair of skill descriptions and identify:
- Direct overlap: two descriptions contain the same triggering phrase (e.g., both mention "weekly report")
- Semantic overlap: two descriptions would plausibly trigger for the same input even though the exact words differ
- Coverage gaps: common user intents that no description covers
Present as a matrix:
| Skill A | Skill B | Overlapping phrases | Risk level |
|---|---|---|---|
| ... | ... | ... | High/Med/Low |
And list any coverage gaps:
- "[user intent X]" → no skill description matches this
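The direct-overlap half of this check can be sketched as a word n-gram intersection. This is a crude first pass (semantic overlap still requires reading both descriptions); a shared bigram like "weekly report" simply flags a pair for closer review:

```python
def shared_phrases(desc_a: str, desc_b: str, n: int = 2) -> set:
    """Return word n-grams that appear in both skill descriptions."""
    def ngrams(text: str) -> set:
        words = [w.strip(".,()").lower() for w in text.split()]
        return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}
    return ngrams(desc_a) & ngrams(desc_b)
```

Any non-empty result for a pair becomes a candidate row in the risk matrix above.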
#### Check 2: Format consistency
Compare the output format sections across all skills:
- Do all skills use the same heading hierarchy? (### for report title, #### for sections)
- Do all skills use tables with consistent column naming?
- Do all skills have a "Tone and output rules" section?
- Do all skills have a "Related skills" section?
- Do all skills use the same date range notation?
Present inconsistencies as a table:
| Pattern | Skills that follow | Skills that don't | Fix needed |
|---|---|---|---|
| ... | ... | ... | ... |
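The section-presence part of this check can be sketched as below. The required section names are assumptions standing in for whatever conventions the collection actually follows:

```python
import re

# Assumed convention: every skill carries these two sections.
REQUIRED_SECTIONS = ["Tone and output rules", "Related skills"]

def missing_sections(skill_text: str) -> list:
    """List required section headings that a skill body lacks."""
    headings = {h.strip().lower()
                for h in re.findall(r"^#{1,6}\s+(.*)$", skill_text, re.MULTILINE)}
    return [s for s in REQUIRED_SECTIONS if s.lower() not in headings]
```

Skills with a non-empty result land in the "Skills that don't" column of the table above.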
#### Check 3: Feature adoption
Check which advanced Claude Code features each skill uses:
| Feature | ds-brain | ds-paid-audit | ds-channel-report | ds-seo-weekly | ds-content-perf | ds-churn-signals | ds-report-pdf |
|---|---|---|---|---|---|---|---|
| allowed-tools | | | | | | | |
| model | | | | | | | |
| argument-hint | | | | | | | |
| $ARGUMENTS | | | | | | | |
| !context injection | | | | | | | |
| ${CLAUDE_SKILL_DIR} | | | | | | | |
| disable-model-invocation | | | | | | | |
| Related skills section | | | | | | | |
Mark each cell as: Yes / No / N/A
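Filling one column of the matrix can be sketched as a marker scan. The marker strings below are assumptions about where each feature shows up in a SKILL.md file (frontmatter keys end in `:`, the rest are body tokens); adjust them to the collection's real conventions:

```python
# Assumed markers, one per feature row in the matrix.
FEATURE_MARKERS = {
    "allowed-tools": "allowed-tools:",
    "model": "model:",
    "argument-hint": "argument-hint:",
    "$ARGUMENTS": "$ARGUMENTS",
    "!context injection": "!`",
    "${CLAUDE_SKILL_DIR}": "${CLAUDE_SKILL_DIR}",
    "disable-model-invocation": "disable-model-invocation:",
    "Related skills section": "Related skills",
}

def feature_column(skill_text: str) -> dict:
    """One skill's column of the adoption matrix: feature -> Yes/No."""
    return {name: ("Yes" if marker in skill_text else "No")
            for name, marker in FEATURE_MARKERS.items()}
```

A plain substring match cannot distinguish Yes from N/A, so N/A cells still need a human call.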
#### Check 4: Agent-orchestrator alignment
For ds-brain and its four subagents:
- Does each agent's output format match what ds-brain expects to receive?
- Does ds-brain's Step 2 prompt match the data each agent fetches?
- Are there data points ds-brain analyzes in Step 3 that no agent provides?
- Are there agent outputs that ds-brain never uses?
Present misalignments as specific findings.
#### Check 5: Complexity and maintainability
For each file:
- Line count and whether it's approaching the 500-line limit
- Number of steps/sections
- Ratio of instructions to examples (too many examples = noise)
- Any duplicated content between skills (copy-paste patterns)
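The per-file metrics can be sketched as below; the 80%-of-limit warning threshold is an assumption, chosen so a file is flagged before it actually hits the 500-line limit:

```python
def complexity_report(skill_text: str, limit: int = 500) -> dict:
    """Line count, headroom against the line limit, and section count."""
    lines = skill_text.splitlines()
    sections = sum(1 for l in lines if l.lstrip().startswith("#"))
    return {
        "lines": len(lines),
        "near_limit": len(lines) > limit * 0.8,  # flag at 80% of the limit
        "sections": sections,
    }
```

Duplicated-content detection is a separate pass (e.g. comparing shared paragraphs across files) and still needs judgment about what is deliberate boilerplate versus copy-paste drift.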
### Step 3 — Prioritized recommendations
After all five checks, produce a prioritized list of improvements:
### Audit recommendations

#### Critical (fix now)
- [Finding] — [which files] — [specific fix]

#### Important (fix this sprint)
- [Finding] — [which files] — [specific fix]

#### Nice to have (backlog)
- [Finding] — [which files] — [specific fix]
### Step 4 — Score card
End with a simple health score:
| Dimension | Score (1-5) | Notes |
|---|---|---|
| Description quality | | |
| Triggering accuracy | | |
| Format consistency | | |
| Feature adoption | | |
| Agent alignment | | |
| Maintainability | | |
| Overall | | |
## Rules
- This audit is about the collection as a whole, not individual file validation. Individual file issues are ds-lint's responsibility.
- Be specific in recommendations. "Improve descriptions" is not useful. "Add 'CPA analysis' as a triggering phrase to ds-paid-audit description" is.
- If the collection is in good shape, say so. Do not manufacture problems.
- Write in the same language the user is using.