# Discover Codebase Enhancements

## Overview
Spend significant time crawling and analyzing the codebase to surface high-impact improvements. Center findings on the jobs-to-be-done of the codebase, developers, end users, and AI agents working in the repo.
## Inputs (ask if missing, max 5)
- Target area or scope (whole repo or specific modules)
- Primary user jobs-to-be-done and business goals
- Known pain points or incidents
- Constraints (time, risk tolerance, release window)
- Evidence sources allowed (tests, metrics, logs)
## Jobs-to-Be-Done Lens
- Codebase: reliability, simplicity, maintainability
- Developers: speed, clarity, safe changes
- End users: correctness, performance, usability
- AI agents: discoverability, consistency, explicit patterns
## Workflow

- Deep crawl
  - Read architecture docs, READMEs, key modules, and tests.
  - Search for hotspots (TODO/FIXME, large files, duplication, complex flows); see the scan sketch after this list.
- Evidence gathering
  - Note error-prone areas, missing tests, performance risks, and coupling.
  - Capture references to files/functions and concrete symptoms.
- Opportunity synthesis
  - Group findings by theme: correctness, performance, DX, architecture, tests, tooling.
- Impact scoring
  - Rate impact, effort, risk, and evidence strength; a scoring sketch also follows this list.
- Ranked recommendations
  - Present top enhancements with rationale and expected outcomes.
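To make the hotspot search concrete, here is a minimal scan sketch in Python. The markers, size threshold, skipped directories, and the `*.py` glob are illustrative assumptions to adapt per repository, not part of this skill.

```python
from pathlib import Path

# Assumed heuristics -- tune per repository.
MARKERS = ("TODO", "FIXME", "HACK", "XXX")
LARGE_FILE_LINES = 500
SKIP_DIRS = {".git", "node_modules", ".venv", "dist"}

def scan(root: str = ".") -> None:
    """Print marker hits and unusually large files under root."""
    for path in Path(root).rglob("*.py"):  # widen the glob for other languages
        if SKIP_DIRS & set(path.parts):
            continue
        try:
            lines = path.read_text(errors="ignore").splitlines()
        except OSError:
            continue
        if len(lines) > LARGE_FILE_LINES:
            print(f"LARGE  {path} ({len(lines)} lines)")
        for lineno, line in enumerate(lines, 1):
            if any(marker in line for marker in MARKERS):
                print(f"MARKER {path}:{lineno}: {line.strip()}")

if __name__ == "__main__":
    scan()
```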
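For the impact-scoring step, one possible rubric is to map the high/medium/low ratings onto numbers and rank by a weighted score, as in this sketch. The weights, the formula, and the example findings are assumptions for illustration, not a prescribed method.

```python
from dataclasses import dataclass

SCALE = {"low": 1, "medium": 2, "high": 3}  # assumed rating scale

@dataclass
class Finding:
    name: str
    impact: str    # benefit if done
    effort: str    # high means expensive
    risk: str      # high means dangerous
    evidence: str  # strength of supporting evidence

    def score(self) -> int:
        # Reward impact and evidence; penalize effort and risk.
        return (2 * SCALE[self.impact] + SCALE[self.evidence]
                - SCALE[self.effort] - SCALE[self.risk])

# Hypothetical findings, purely for illustration.
findings = [
    Finding("Deduplicate config parsing", "high", "medium", "low", "medium"),
    Finding("Rewrite build system", "medium", "high", "high", "low"),
]
for f in sorted(findings, key=Finding.score, reverse=True):
    print(f"{f.score():>3}  {f.name}")
```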
## Output Format

```markdown
## Codebase Enhancement Discovery

### Context Summary
[1-3 sentences]

### JTBD Summary
- Codebase: ...
- Developers: ...
- End users: ...
- AI agents: ...

### Evidence Sources
- Files/modules reviewed: ...
- Patterns searched: ...
- Tests or metrics considered: ...

### Ranked Enhancements
1) [Enhancement]
   - Category: ...
   - Impact: high | Effort: medium | Risk: low | Evidence: moderate
   - Rationale: ...
   - Affected areas: ...

### Quick Wins
- ...

### Open Questions
- ...
```
## Quick Reference
- Spend more time exploring than feels necessary.
- Prefer evidence-backed findings over speculation.
- Center recommendations on user and developer outcomes.
## Common Mistakes
- Skimming without enough code context
- Listing fixes without evidence or impact scoring
- Ignoring AI agent or developer workflows
- Recommending changes that fight the existing architecture