code-analyzer

SKILL.md

Purpose

Act as a Senior Software Architect + Tech Lead to analyze code modules and produce structured technical reports that explain internal behavior, module communication, architectural patterns, and system relationships — with Mermaid diagrams.

CRITICAL RULES

  1. Never assume context that doesn't exist. Only report what the code explicitly shows.
  2. Never invent dependencies. If a dependency isn't visible in imports, configs, or code, don't add it.
  3. If information is missing, say so explicitly. Document unknowns as unknowns, not guesses.
  4. Never copy full source code into the report. Explain how the code works — don't reproduce it.

When to Use This Skill

  • Onboarding: New team members need to understand how a module works
  • Technical audit: Reviewing module responsibilities, dependencies, and communication patterns
  • Refactoring preparation: Understanding the current state before making architectural changes
  • Living documentation: Generating reusable technical docs from actual code
  • Code review context: Understanding the bigger picture around a set of changes
  • Incident analysis: Tracing how a module interacts with others to debug systemic issues

Capabilities

Code Analysis

  • Internal module behavior and execution flow
  • Function/class responsibility mapping
  • State management and error handling patterns
  • Dependency identification (internal and external)

Architecture Assessment

  • Architectural pattern detection (MVC, Clean Architecture, Hexagonal, etc.)
  • Module boundary and responsibility analysis
  • Coupling and cohesion evaluation
  • Design principle adherence (SOLID, DRY, etc.)

Communication Mapping

  • Inter-module communication (sync/async)
  • API surface analysis (what a module exposes and consumes)
  • Event-driven patterns (pub/sub, event emitters, message queues)
  • Shared state and data flow analysis

Technical Documentation

  • Structured markdown reports
  • Mermaid diagrams (flowcharts, sequence, class, C4)
  • Executive summaries for non-technical stakeholders
  • Detailed technical breakdowns for engineers

Input Expected

The user provides:

| Input | Required | Description |
| --- | --- | --- |
| Module/file path | Yes | Path to the code to analyze (e.g., /src/modules/orders) |
| Code fragments | Optional | Partial or complete code snippets if not accessible via filesystem |
| Language/framework | Optional | If not detectable from code (e.g., "NestJS", "Next.js", "FastAPI") |
| Additional context | Optional | Business context, known constraints, specific questions |
| Analysis depth | Optional | v1 (explanation), v2 (+ diagrams), v3 (+ refactor recommendations) |

Example prompts:

  • "Analyze the module at /src/modules/payments"
  • "Explain how /apps/core/auth works and how it connects to other modules"
  • "Do a v3 analysis of /src/services/notification-service.ts"

Configuration Resolution

{output_dir} is the directory where code-analyzer stores generated reports. Resolve it once at the start:

  1. User message context — If the user's message contains file paths, extract {output_dir} from those paths
  2. Auto-discover — Scan for .agents/code-analyzer/ in {cwd}
  3. Ask the user — If nothing found, ask where to save reports. Default suggestion: .agents/code-analyzer/{project-name}/

No AGENTS.md. No branded blocks. The output directory is resolved at runtime.
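The three-step resolution order above can be sketched in Python. This is an illustrative sketch only — the function and parameter names are invented here, not part of the skill:

```python
from pathlib import Path

def resolve_output_dir(user_paths, cwd):
    """Sketch of {output_dir} resolution; returns None when the user must be asked."""
    # 1. User message context: reuse a code-analyzer directory the user referenced
    for p in user_paths:
        s = str(p)
        idx = s.find(".agents/code-analyzer")
        if idx != -1:
            return Path(s[: idx + len(".agents/code-analyzer")])
    # 2. Auto-discover: look for .agents/code-analyzer/ under {cwd}
    candidate = Path(cwd) / ".agents" / "code-analyzer"
    if candidate.is_dir():
        return candidate
    # 3. Nothing found: the skill should ask the user where to save reports
    return None
```

A `None` result maps to step 3: prompt the user, suggesting the default .agents/code-analyzer/{project-name}/.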

Obsidian Output Standard

All documents generated by this skill MUST follow these Obsidian output rules:

  1. Frontmatter: Every .md file includes the universal frontmatter schema (title, date, updated, project, type, status, version, tags, changelog, related)
  2. Types: Use technical-report for REPORT.md, refactor-plan for REFACTOR.md
  3. Wiki-links: When both REPORT.md and REFACTOR.md exist, cross-reference with [[REPORT]] / [[REFACTOR]]
  4. Referencias: Every document ends with ## Referencias listing related analysis documents
  5. Metrics: Use | Metric | Before | After | Delta | Status | format for code quality metrics, complexity scores, and coverage data
  6. IDs: Prefix debt item IDs with D- in refactor plans
  7. Bidirectional: If REFACTOR.md references REPORT.md, REPORT.md must reference REFACTOR.md

See assets/templates/ for complete frontmatter schemas and document structures.
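As an illustrative sketch of these rules (all field values below are invented for the example; the authoritative schemas live in assets/templates/), a REPORT.md might open and close like this:

```markdown
---
title: Payments Module Analysis
date: 2026-02-09
updated: 2026-02-09
project: example-shop
type: technical-report
status: draft
version: 1.0.0
tags: [module-analysis, payments]
changelog: []
related: ["[[REFACTOR]]"]
---

...report body...

## Metrics

| Metric | Before | After | Delta | Status |
| --- | --- | --- | --- | --- |
| Cyclomatic complexity (avg) | 12 | - | - | measured |

## Referencias

- [[REFACTOR]]
```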

Workflow

Step 1: Discovery

Read and explore the target module/file to understand its structure.

Actions:

  1. Read the target path — identify all files, directories, and entry points
  2. Detect the language and framework from file extensions, imports, and config files
  3. Identify the module boundary (what's inside vs. outside the module)
  4. List all files that belong to the module

Output: Internal understanding of the module's file structure and technology stack.
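Action 2 (language/framework detection) can be approximated with a config-file heuristic. A minimal sketch, assuming a small and far-from-exhaustive marker list:

```python
import json
from pathlib import Path

def detect_stack(module_dir: str) -> str:
    """Heuristic stack detection from config files (marker list is illustrative)."""
    root = Path(module_dir)
    pkg = root / "package.json"
    if pkg.exists():
        deps = json.loads(pkg.read_text()).get("dependencies", {})
        # Check well-known framework markers before falling back to plain node
        for marker in ("@nestjs/core", "next", "fastify", "express"):
            if marker in deps:
                return marker
        return "node"
    if (root / "requirements.txt").exists() or (root / "pyproject.toml").exists():
        return "python"
    if (root / "go.mod").exists():
        return "go"
    return "unknown"
```

When the heuristic returns "unknown", fall back to asking the user, per the Input Expected table.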

Step 2: Deep Analysis

Analyze the code to understand internal behavior.

Actions:

  1. Identify the module's main responsibilities — what does it do?
  2. Map key functions/classes and their roles
  3. Trace the primary execution flow — entry point to output
  4. Analyze state management — how data flows and transforms
  5. Analyze error handling — how failures are managed
  6. List internal dependencies (other modules in the same project)
  7. List external dependencies (third-party libraries, APIs, services)

Output: Deep understanding of behavior, responsibilities, and dependencies.

Step 3: Communication Mapping

Understand how the module talks to the rest of the system.

Actions:

  1. Identify what the module consumes (imports, API calls, events listened to)
  2. Identify what the module exposes (exports, API endpoints, events emitted)
  3. Classify communication types: synchronous (function calls, HTTP) vs. asynchronous (events, queues, WebSockets)
  4. Identify shared state (global stores, shared databases, caches)

Output: Clear map of module boundaries and communication channels.
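For actions 1-2, a first-pass surface map can be extracted mechanically. The sketch below is intentionally naive — regex-based, JS/TS-only, and invented for illustration; a real pass would use an AST and cover the other communication types:

```python
import re

def map_surface(source: str) -> dict:
    """Rough consumes/exposes map for a JS/TS module (illustrative sketch)."""
    consumes = re.findall(r"import\s+.*?from\s+['\"]([^'\"]+)['\"]", source)
    exposes = re.findall(
        r"export\s+(?:default\s+)?(?:async\s+)?"
        r"(?:function|class|const|interface|type)\s+(\w+)",
        source,
    )
    # Relative specifiers point at sibling modules; the rest are third-party
    internal = [d for d in consumes if d.startswith(".")]
    external = [d for d in consumes if not d.startswith(".")]
    return {"internal": internal, "external": external, "exposes": exposes}
```

The resulting lists seed the Module Communication section; sync/async classification and shared state still require reading the call sites.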

Step 4: Report Generation

Produce the structured technical report with all findings.

Actions:

  1. Write the report following the Output Structure (see below)
  2. Generate Mermaid diagrams for visual understanding
  3. Save the report to {output_dir}/technical/module-analysis/{module-name}/
  4. Add ## Referencias section at the end of the report (link to REFACTOR.md if v3, link to any other analysis documents for the same module)

Output: Complete markdown report with diagrams.

Step 5: Refactor Recommendations (v3 only)

If the user requests a v3 analysis, add improvement suggestions.

Actions:

  1. Identify code smells and architectural issues
  2. Suggest specific, actionable improvements
  3. Rate each recommendation by impact and effort
  4. Prioritize recommendations
  5. Add ## Referencias section linking back to [[REPORT]] and any related analysis documents

Output: Actionable refactoring roadmap appended to the report.

Output Location

All reports are saved to a central technical documentation directory:

{output_dir}/technical/module-analysis/
└── {module-name}/
    ├── REPORT.md              # Main technical report
    └── REFACTOR.md            # Refactoring recommendations (v3 only)

Naming convention: Use the module's folder name in kebab-case.

  • /src/modules/OrderService → {output_dir}/technical/module-analysis/order-service/
  • /apps/core/payments → {output_dir}/technical/module-analysis/payments/
  • /src/services/notification-service.ts → {output_dir}/technical/module-analysis/notification-service/
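The naming convention above can be sketched as a small helper (the function name is invented for illustration):

```python
import re

def kebab_case(path: str) -> str:
    """Derive the report folder name from a module path (illustrative sketch)."""
    name = path.rstrip("/").rsplit("/", 1)[-1]           # last path segment
    name = re.sub(r"\.[A-Za-z0-9]+$", "", name)          # drop a file extension
    name = re.sub(r"(?<=[a-z0-9])(?=[A-Z])", "-", name)  # split CamelCase words
    return re.sub(r"[_\s]+", "-", name).lower()          # normalize separators
```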

Output Structure

See assets/templates/ for complete document structures:

  • REPORT.md — Technical analysis report template with Executive Summary, Technical Analysis, Module Communication, Technical Diagrams, Metrics, and Referencias sections
  • REFACTOR.md — Refactoring recommendations template (v3 only) with Code Smells, Recommendations, Priority Matrix, Implementation Plan, Impact Analysis, Testing Strategy, and Referencias sections

Key Sections Overview

REPORT.md includes:

  1. Executive Summary (module overview, purpose, criticality, technology)
  2. Technical Analysis (responsibilities, key functions, execution flow, state management, error handling, dependencies)
  3. Module Communication (consumes, exposes, communication types, shared state)
  4. Technical Diagrams (Mermaid diagrams based on complexity)
  5. Metrics (code quality metrics using standard format)
  6. Referencias (bidirectional links to related documents)

REFACTOR.md (v3 only) includes:

  1. Code Smells (issues with severity ratings)
  2. Recommendations (actionable improvements with priority, impact, effort)
  3. Priority Matrix (visual representation of recommendations)
  4. Implementation Plan (phased refactoring roadmap)
  5. Impact Analysis (affected components, risk assessment, expected benefits)
  6. Testing Strategy (validation approach)
  7. Referencias (link back to REPORT.md)

Analysis Depth Levels

| Level | Name | Includes | Use When |
| --- | --- | --- | --- |
| v1 | Explanation | Executive Summary + Technical Analysis + Communication | Quick understanding of a module |
| v2 | Explanation + Diagrams | Everything in v1 + Mermaid Diagrams | Documentation or onboarding (default) |
| v3 | Full Analysis | Everything in v2 + Refactoring Recommendations | Pre-refactoring audit or technical review |

Default: If the user doesn't specify a level, use v2.

Critical Patterns

Pattern 1: Read Before You Write

Always read the actual code before generating any analysis. Never produce a report based on file names, folder structure, or assumptions alone. If a file can't be read, document it as "inaccessible" rather than guessing its contents.

Pattern 2: Explain, Don't Copy

The report explains how code works — it does not reproduce it. Use short inline snippets (1-3 lines) only when necessary to illustrate a specific pattern or behavior. Never paste full functions, classes, or files.

Bad: Pasting a 50-line function into the report
Good: "The processPayment() function validates the input, calls the payment gateway via gateway.charge(), and emits a payment.completed event on success."

Pattern 3: Explicit Unknowns

When information is not available or cannot be determined from the code:

Bad: Making assumptions about what a module probably does
Good: "The module imports @core/events but the event handler implementations are not visible in this scope. The specific events consumed could not be determined."

Pattern 4: Dependency Honesty

Only list dependencies that are explicitly visible in the code (imports, require statements, config files, dependency injection). If a dependency is suspected but not confirmed, mark it as "suspected" with reasoning.

Pattern 5: Context-Appropriate Diagrams

See assets/helpers/diagram-guidelines.md for detailed Mermaid diagram selection criteria, syntax examples, and best practices. Match diagram complexity to module complexity (simple = flowchart only, medium = flowchart + sequence, complex = flowchart + sequence + class/C4).
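As a sketch of the simple end of that scale (the module, endpoint, and event names here are hypothetical), a flowchart-only diagram might look like:

```mermaid
flowchart LR
    Client -->|"HTTP POST /payments"| Payments[Payments Module]
    Payments -->|"charge()"| Gateway[Payment Gateway]
    Payments -->|"payment.completed"| Bus[(Event Bus)]
```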

Pattern 6: Technology-Agnostic Analysis

The analysis framework works for any language or framework. Adapt terminology to match the technology:

| Concept | JavaScript/TypeScript | Python | Go | Java |
| --- | --- | --- | --- | --- |
| Module | Module/Package | Module/Package | Package | Package |
| Entry point | index.ts / export | __init__.py | main.go | Application.java |
| Interface | Type/Interface | Protocol/ABC | Interface | Interface |
| Dependency injection | Constructor/Provider | __init__ params | Struct fields | @Inject |

Best Practices

Before Analysis

  1. Confirm the target path exists — verify the module path before starting
  2. Identify the project type — monorepo, single app, microservice, library
  3. Check for existing documentation — READMEs, JSDoc, docstrings, OpenAPI specs
  4. Ask for context if needed — don't guess business requirements

During Analysis

  1. Start from entry points — find the main export, router, or handler first
  2. Trace the happy path first — understand the normal flow before edge cases
  3. Map dependencies as you go — build the dependency graph incrementally
  4. Note patterns as you see them — architectural patterns emerge from reading, not guessing
  5. Check test files — tests reveal intended behavior and edge cases

After Analysis

  1. Review the report for accuracy — every statement must be backed by code you read
  2. Verify diagram correctness — ensure diagrams match the textual analysis
  3. Check for missing sections — all required output sections must be present
  4. Save to the correct location — {output_dir}/technical/module-analysis/{module-name}/

Integration with Other Skills

With universal-planner

Use code-analyzer during the Analysis Phase (Step 1) of universal-planner to understand the current state of modules that will be affected by the planned work.

With universal-planner (EXECUTE mode)

Before executing a sprint that modifies a module, run code-analyzer to document the "before" state for comparison.

Limitations

  1. Requires file access: Cannot analyze code that isn't readable via the filesystem. If the user provides code fragments, analysis is limited to what's visible
  2. No runtime analysis: Analyzes static code only — cannot detect runtime behavior, performance characteristics, or dynamic dispatch patterns
  3. Single module focus: Analyzes one module at a time. Cross-module analysis requires separate runs and manual correlation
  4. No automated testing: Does not execute tests or verify that the code works — only analyzes structure and patterns
  5. Framework detection: May not recognize custom or obscure frameworks. The user can provide framework context to compensate

Post-Production Delivery

After generating the technical report (and refactoring recommendations if v3), offer the user delivery options:

  1. Sync to Obsidian vault — invoke the obsidian skill in SYNC mode (see invocation below)
  2. Move to custom path — user specifies a destination and files are moved there
  3. Keep in staging — leave files in {output_dir} for later use

Ask the user which option they prefer.

Obsidian invocation (option 1):

  • Preferred: Skill("obsidian"), then say "sync the files in {output_dir} to the vault"
  • Alternative: Say "sync the output to obsidian" (triggers auto_invoke)
  • Subagent fallback: Read the obsidian SKILL.md and follow SYNC mode workflow

Troubleshooting

| Issue | Solution |
| --- | --- |
| Module path doesn't exist | Verify path with user; check for typos, case sensitivity, or moved files |
| Can't determine framework | Ask user to specify; check config files (package.json, requirements.txt, etc.) |
| Module too large | Break into sub-modules, analyze separately, create top-level summary |
| Dependencies unclear | Mark as "suspected" with reasoning; check DI containers and config files |
| Report seems incomplete | Verify all files read; check for dynamic imports or config-driven behavior |

Example Output

See assets/templates/REPORT.md and assets/templates/REFACTOR.md for complete examples including Executive Summary, Communication Maps, Mermaid diagrams, and all other sections.

Future Enhancements

Multi-module analysis, dependency graph visualization, automated change detection, test coverage integration, and export to Confluence/Notion.
