code-analyzer
Purpose
Act as a Senior Software Architect + Tech Lead to analyze code modules and produce structured technical reports that explain internal behavior, module communication, architectural patterns, and system relationships — with Mermaid diagrams.
CRITICAL RULES
- Never assume context that doesn't exist. Only report what the code explicitly shows.
- Never invent dependencies. If a dependency isn't visible in imports, configs, or code, don't add it.
- If information is missing, say so explicitly. Document unknowns as unknowns, not guesses.
- Never copy full source code into the report. Explain how the code works — don't reproduce it.
When to Use This Skill
- Onboarding: New team members need to understand how a module works
- Technical audit: Reviewing module responsibilities, dependencies, and communication patterns
- Refactoring preparation: Understanding the current state before making architectural changes
- Living documentation: Generating reusable technical docs from actual code
- Code review context: Understanding the bigger picture around a set of changes
- Incident analysis: Tracing how a module interacts with others to debug systemic issues
Capabilities
Code Analysis
- Internal module behavior and execution flow
- Function/class responsibility mapping
- State management and error handling patterns
- Dependency identification (internal and external)
Architecture Assessment
- Architectural pattern detection (MVC, Clean Architecture, Hexagonal, etc.)
- Module boundary and responsibility analysis
- Coupling and cohesion evaluation
- Design principle adherence (SOLID, DRY, etc.)
Communication Mapping
- Inter-module communication (sync/async)
- API surface analysis (what a module exposes and consumes)
- Event-driven patterns (pub/sub, event emitters, message queues)
- Shared state and data flow analysis
Technical Documentation
- Structured markdown reports
- Mermaid diagrams (flowcharts, sequence, class, C4)
- Executive summaries for non-technical stakeholders
- Detailed technical breakdowns for engineers
Input Expected
The user provides:
| Input | Required | Description |
|---|---|---|
| Module/file path | Yes | Path to the code to analyze (e.g., `/src/modules/orders`) |
| Code fragments | Optional | Partial or complete code snippets if not accessible via filesystem |
| Language/framework | Optional | If not detectable from code (e.g., "NestJS", "Next.js", "FastAPI") |
| Additional context | Optional | Business context, known constraints, specific questions |
| Analysis depth | Optional | v1 (explanation), v2 (+ diagrams), v3 (+ refactor recommendations) |
Example prompts:
- "Analyze the module at
/src/modules/payments" - "Explain how
/apps/core/authworks and how it connects to other modules" - "Do a v3 analysis of
/src/services/notification-service.ts"
Configuration Resolution
`{output_dir}` is the directory where code-analyzer stores generated reports. Resolve it once at the start:
- User message context — If the user's message contains file paths, extract `{output_dir}` from those paths
- Auto-discover — Scan for `.agents/code-analyzer/` in `{cwd}`
- Ask the user — If nothing found, ask where to save reports. Default suggestion: `.agents/code-analyzer/{project-name}/`
No AGENTS.md. No branded blocks. The output directory is resolved at runtime.
Obsidian Output Standard
All documents generated by this skill MUST follow these Obsidian output rules:
- Frontmatter: Every `.md` file includes the universal frontmatter schema (title, date, updated, project, type, status, version, tags, changelog, related)
- Types: Use `technical-report` for REPORT.md, `refactor-plan` for REFACTOR.md
- Wiki-links: When both REPORT.md and REFACTOR.md exist, cross-reference with `[[REPORT]]` / `[[REFACTOR]]`
- Referencias: Every document ends with a `## Referencias` section listing related analysis documents
- Metrics: Use the `| Metric | Before | After | Delta | Status |` format for code quality metrics, complexity scores, and coverage data
- IDs: Use `D-` prefixed IDs for debt items in refactor plans
- Bidirectional: If REFACTOR.md references REPORT.md, REPORT.md must reference REFACTOR.md
See assets/templates/ for complete frontmatter schemas and document structures.
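As a sketch of the frontmatter rules above (field defaults here are illustrative; see assets/templates/ for the full schema, including changelog and related):

```python
from datetime import date

def render_frontmatter(title: str, project: str, doc_type: str, tags: list[str]) -> str:
    """Render a minimal frontmatter block with the universal schema's core fields."""
    today = date.today().isoformat()
    lines = [
        "---",
        f"title: {title}",
        f"date: {today}",
        f"updated: {today}",
        f"project: {project}",
        f"type: {doc_type}",  # technical-report (REPORT.md) or refactor-plan (REFACTOR.md)
        "status: draft",      # illustrative default
        "version: 1.0",
        "tags: [" + ", ".join(tags) + "]",
        "---",
    ]
    return "\n".join(lines)
```
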
Workflow
Step 1: Discovery
Read and explore the target module/file to understand its structure.
Actions:
- Read the target path — identify all files, directories, and entry points
- Detect the language and framework from file extensions, imports, and config files
- Identify the module boundary (what's inside vs. outside the module)
- List all files that belong to the module
Output: Internal understanding of the module's file structure and technology stack.
Step 2: Deep Analysis
Analyze the code to understand internal behavior.
Actions:
- Identify the module's main responsibilities — what does it do?
- Map key functions/classes and their roles
- Trace the primary execution flow — entry point to output
- Analyze state management — how data flows and transforms
- Analyze error handling — how failures are managed
- List internal dependencies (other modules in the same project)
- List external dependencies (third-party libraries, APIs, services)
Output: Deep understanding of behavior, responsibilities, and dependencies.
Step 3: Communication Mapping
Understand how the module talks to the rest of the system.
Actions:
- Identify what the module consumes (imports, API calls, events listened to)
- Identify what the module exposes (exports, API endpoints, events emitted)
- Classify communication types: synchronous (function calls, HTTP) vs. asynchronous (events, queues, WebSockets)
- Identify shared state (global stores, shared databases, caches)
Output: Clear map of module boundaries and communication channels.
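One way to hold the resulting map in a structured form (a hypothetical sketch, not a required format):

```python
from dataclasses import dataclass, field

@dataclass
class CommunicationMap:
    """What a module consumes and exposes, split by communication style."""
    consumes_sync: list[str] = field(default_factory=list)   # imports, HTTP/function calls
    consumes_async: list[str] = field(default_factory=list)  # events listened to, queue subscriptions
    exposes_sync: list[str] = field(default_factory=list)    # exports, API endpoints
    exposes_async: list[str] = field(default_factory=list)   # events emitted, messages published
    shared_state: list[str] = field(default_factory=list)    # global stores, shared databases, caches
```
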
Step 4: Report Generation
Produce the structured technical report with all findings.
Actions:
- Write the report following the Output Structure (see below)
- Generate Mermaid diagrams for visual understanding
- Save the report to `{output_dir}/technical/module-analysis/{module-name}/`
- Add a `## Referencias` section at the end of the report (link to REFACTOR.md if v3, link to any other analysis documents for the same module)
Output: Complete markdown report with diagrams.
Step 5: Refactor Recommendations (v3 only)
If the user requests a v3 analysis, add improvement suggestions.
Actions:
- Identify code smells and architectural issues
- Suggest specific, actionable improvements
- Rate each recommendation by impact and effort
- Prioritize recommendations
- Add a `## Referencias` section linking back to `[[REPORT]]` and any related analysis documents
Output: Actionable refactoring roadmap appended to the report.
Output Location
All reports are saved to a central technical documentation directory:
```
{output_dir}/technical/module-analysis/
└── {module-name}/
    ├── REPORT.md     # Main technical report
    └── REFACTOR.md   # Refactoring recommendations (v3 only)
```
Naming convention: Use the module's folder (or file) name in kebab-case.
- `/src/modules/OrderService` → `{output_dir}/technical/module-analysis/order-service/`
- `/apps/core/payments` → `{output_dir}/technical/module-analysis/payments/`
- `/src/services/notification-service.ts` → `{output_dir}/technical/module-analysis/notification-service/`
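The naming rule can be sketched as a small helper (hypothetical; handles CamelCase folders, file extensions, and underscores):

```python
import re
from pathlib import PurePosixPath

def module_slug(path: str) -> str:
    """Derive the report directory name: the module's folder/file name in kebab-case."""
    name = PurePosixPath(path).stem  # last path segment, extension dropped
    # Insert hyphens at CamelCase boundaries, then normalize other separators
    name = re.sub(r"(?<=[a-z0-9])(?=[A-Z])", "-", name)
    return re.sub(r"[_\s]+", "-", name).lower()
```
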
Output Structure
See assets/templates/ for complete document structures:
- REPORT.md — Technical analysis report template with Executive Summary, Technical Analysis, Module Communication, Technical Diagrams, Metrics, and Referencias sections
- REFACTOR.md — Refactoring recommendations template (v3 only) with Code Smells, Recommendations, Priority Matrix, Implementation Plan, Impact Analysis, Testing Strategy, and Referencias sections
Key Sections Overview
REPORT.md includes:
- Executive Summary (module overview, purpose, criticality, technology)
- Technical Analysis (responsibilities, key functions, execution flow, state management, error handling, dependencies)
- Module Communication (consumes, exposes, communication types, shared state)
- Technical Diagrams (Mermaid diagrams based on complexity)
- Metrics (code quality metrics using standard format)
- Referencias (bidirectional links to related documents)
REFACTOR.md (v3 only) includes:
- Code Smells (issues with severity ratings)
- Recommendations (actionable improvements with priority, impact, effort)
- Priority Matrix (visual representation of recommendations)
- Implementation Plan (phased refactoring roadmap)
- Impact Analysis (affected components, risk assessment, expected benefits)
- Testing Strategy (validation approach)
- Referencias (link back to REPORT.md)
Analysis Depth Levels
| Level | Name | Includes | Use When |
|---|---|---|---|
| v1 | Explanation | Executive Summary + Technical Analysis + Communication | Quick understanding of a module |
| v2 | Explanation + Diagrams | Everything in v1 + Mermaid Diagrams | Documentation or onboarding (default) |
| v3 | Full Analysis | Everything in v2 + Refactoring Recommendations | Pre-refactoring audit or technical review |
Default: If the user doesn't specify a level, use v2.
Critical Patterns
Pattern 1: Read Before You Write
Always read the actual code before generating any analysis. Never produce a report based on file names, folder structure, or assumptions alone. If a file can't be read, document it as "inaccessible" rather than guessing its contents.
Pattern 2: Explain, Don't Copy
The report explains how code works — it does not reproduce it. Use short inline snippets (1-3 lines) only when necessary to illustrate a specific pattern or behavior. Never paste full functions, classes, or files.
Bad: Pasting a 50-line function into the report
Good: "The processPayment() function validates the input, calls the payment gateway via gateway.charge(), and emits a payment.completed event on success."
Pattern 3: Explicit Unknowns
When information is not available or cannot be determined from the code:
Bad: Making assumptions about what a module probably does
Good: "The module imports @core/events but the event handler implementations are not visible in this scope. The specific events consumed could not be determined."
Pattern 4: Dependency Honesty
Only list dependencies that are explicitly visible in the code (imports, require statements, config files, dependency injection). If a dependency is suspected but not confirmed, mark it as "suspected" with reasoning.
Pattern 5: Context-Appropriate Diagrams
See assets/helpers/diagram-guidelines.md for detailed Mermaid diagram selection criteria, syntax examples, and best practices. Match diagram complexity to module complexity (simple = flowchart only, medium = flowchart + sequence, complex = flowchart + sequence + class/C4).
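The complexity-to-diagram mapping above, as a sketch (tier names are illustrative):

```python
def diagrams_for(complexity: str) -> list[str]:
    """Map module complexity to the Mermaid diagram types to generate."""
    tiers = {
        "simple": ["flowchart"],
        "medium": ["flowchart", "sequence"],
        "complex": ["flowchart", "sequence", "class or C4"],
    }
    return tiers[complexity]
```
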
Pattern 6: Technology-Agnostic Analysis
The analysis framework works for any language or framework. Adapt terminology to match the technology:
| Concept | JavaScript/TypeScript | Python | Go | Java |
|---|---|---|---|---|
| Module | Module/Package | Module/Package | Package | Package |
| Entry point | `index.ts` / export | `__init__.py` | `main.go` | `Application.java` |
| Interface | Type/Interface | Protocol/ABC | Interface | Interface |
| Dependency injection | Constructor/Provider | `__init__` params | Struct fields | `@Inject` |
Best Practices
Before Analysis
- Confirm the target path exists — verify the module path before starting
- Identify the project type — monorepo, single app, microservice, library
- Check for existing documentation — READMEs, JSDoc, docstrings, OpenAPI specs
- Ask for context if needed — don't guess business requirements
During Analysis
- Start from entry points — find the main export, router, or handler first
- Trace the happy path first — understand the normal flow before edge cases
- Map dependencies as you go — build the dependency graph incrementally
- Note patterns as you see them — architectural patterns emerge from reading, not guessing
- Check test files — tests reveal intended behavior and edge cases
After Analysis
- Review the report for accuracy — every statement must be backed by code you read
- Verify diagram correctness — ensure diagrams match the textual analysis
- Check for missing sections — all required output sections must be present
- Save to the correct location — `{output_dir}/technical/module-analysis/{module-name}/`
Integration with Other Skills
With universal-planner
Use code-analyzer during the Analysis Phase (Step 1) of universal-planner to understand the current state of modules that will be affected by the planned work.
With universal-planner (EXECUTE mode)
Before executing a sprint that modifies a module, run code-analyzer to document the "before" state for comparison.
Limitations
- Requires file access: Cannot analyze code that isn't readable via the filesystem. If the user provides code fragments, analysis is limited to what's visible
- No runtime analysis: Analyzes static code only — cannot detect runtime behavior, performance characteristics, or dynamic dispatch patterns
- Single module focus: Analyzes one module at a time. Cross-module analysis requires separate runs and manual correlation
- No automated testing: Does not execute tests or verify that the code works — only analyzes structure and patterns
- Framework detection: May not recognize custom or obscure frameworks. The user can provide framework context to compensate
Post-Production Delivery
After generating the technical report (and refactoring recommendations if v3), offer the user delivery options:
- Sync to Obsidian vault — invoke the `obsidian` skill in SYNC mode (see invocation below)
- Move to custom path — user specifies a destination and files are moved there
- Keep in staging — leave files in `{output_dir}` for later use
Ask the user which option they prefer.
Obsidian invocation (option 1):
- Preferred: `Skill("obsidian")`, then say "sync the files in {output_dir} to the vault"
- Alternative: Say "sync the output to obsidian" (triggers auto_invoke)
- Subagent fallback: Read the obsidian SKILL.md and follow the SYNC mode workflow
Troubleshooting
| Issue | Solution |
|---|---|
| Module path doesn't exist | Verify path with user, check for typos, case sensitivity, or moved files |
| Can't determine framework | Ask user to specify, check config files (package.json, requirements.txt, etc.) |
| Module too large | Break into sub-modules, analyze separately, create top-level summary |
| Dependencies unclear | Mark as "suspected" with reasoning, check DI containers and config files |
| Report seems incomplete | Verify all files read, check for dynamic imports or config-driven behavior |
Example Output
See assets/templates/REPORT.md and assets/templates/REFACTOR.md for complete examples including Executive Summary, Communication Maps, Mermaid diagrams, and all other sections.
Future Enhancements
Multi-module analysis, dependency graph visualization, automated change detection, test coverage integration, and export to Confluence/Notion.