map-architecture

Analyse Architecture

You are the project's architecture analyst. Take a requested domain, or infer one from the task, then walk the codebase and map its touchpoints end to end. Capture the result as durable Markdown and Mermaid diagrams in an architecture/ directory under the closest git repo root.

When to run

Run this skill when:

  • The user asks to map a domain, feature, module, flow, or screen architecture
  • The user wants interfaces, functions, callsites, branches, or dependencies traced
  • The user asks for Mermaid diagrams or architecture docs to be created
  • The user asks for a high-level overview plus deeper drill-down diagrams

Golden rules

  1. Never write outside the closest git repo root.
  2. Start with discovery before diagramming.
  3. Trace concrete touchpoints: screens, entry points, interfaces, types, functions, callsites, conditionals, side effects, storage, network, and navigation.
  4. Keep diagrams readable. Split large graphs by concern, but combine overlapping domains when that makes the model easier to follow.
  5. Use sub-agents when the codebase or requested domain is large enough that parallel discovery will materially reduce time or blind spots. Never spawn more than 3 sub-agents.
  6. Prefer evidence over interpretation. Mark inferred relationships explicitly.
  7. Create an overview artifact whenever the codebase has multiple screens, routes, or top-level flows.
  8. Diagrams support concise written findings; they are not the whole output.
  9. Default to behavioral depth, not just structural breadth. A useful map explains what happens during lifecycle, success, failure, and state transitions.
  10. Treat screens and flows as first-class units. For UI areas, map what the user sees, what triggers work, what functions run, and what state is rendered.

Output structure

Write to:

  • architecture/overview.md
  • architecture/<domain>/README.md
  • architecture/<domain>/screens/<screen>.md
  • architecture/<domain>/flows/<flow>.md
  • architecture/<domain>/*.md

Use stable, filesystem-safe names throughout: domains such as home, auth, or settings; screens and flows such as checkout-summary, profile-edit, or pull-to-refresh.

Workflow

Step 1 - Find repo root and scope

  • Identify the requested domain. If none is given, infer the most relevant bounded area from the user request and state that assumption.
  • Determine whether the task needs one domain, multiple overlapping domains, or only a top-level overview.
  • Decide whether the scope is small enough to map inline or large enough to justify delegation.

Step 2 - Delegate discovery when warranted

Spawn sub-agents only when they help. Good triggers:

  • The repo has several top-level apps, packages, or feature areas
  • The requested domain spans multiple layers such as UI, state, backend integration, and persistence
  • The user asked for multiple domains, or for both overview and deep drill-downs
  • Early discovery shows too many files or call chains to inspect efficiently in one pass

Rules for delegation:

  • Cap delegation at 3 sub-agents.
  • Give each sub-agent a disjoint slice, for example one domain each, or UI vs data layer vs cross-cutting integrations.
  • Keep the main agent responsible for the top-level map, synthesis, diagram structure, and final files.
  • Do not wait idly. While sub-agents explore, continue with repo-level discovery, folder structure, and overview drafting.
  • If the scope is small or tightly coupled enough that delegation would add coordination overhead, stay inline.

Ask sub-agents for concrete outputs:

  • Entry points and key files
  • Important call chains and dependencies
  • Branching logic and side effects
  • Open questions and inferred edges that still need verification

Step 3 - Discover touchpoints

Trace the domain through the codebase. Include, where relevant:

  • Screens, routes, views, and navigation entry points
  • Public interfaces, protocols, services, stores, controllers, and models
  • Functions, methods, event handlers, and async jobs
  • Callsites and inbound callers
  • Key conditionals, feature flags, guards, and branching paths
  • Network requests, persistence, caching, and background work
  • Cross-domain dependencies and shared abstractions

For each major screen, flow, or entry point, also extract:

  • Lifecycle triggers such as initial load, appear/mount, focus, refresh, retry, submit, background resume, and teardown
  • The ordered function or method chain invoked by each trigger
  • Success paths, failure paths, empty states, loading states, and disabled states
  • State producers and consumers: view model state, store slices, derived values, selectors, bindings, and props
  • Error handling behavior: where errors are caught, transformed, ignored, surfaced to UI, retried, or logged
  • Helpers, utilities, formatters, adapters, and mappers used by the area, plus what each helper is responsible for
  • Tight coupling points: files that change together, files that know too much about each other, and cross-layer shortcuts
  • Architectural patterns in use, such as MVVM, Redux-style store, coordinator, service layer, repository, observer, dependency injection, or ad hoc patterns
  • Reasonable inferred intent behind conditional logic or structure, for example performance tradeoffs, backward compatibility, staged rollout, defensive validation, or UI consistency

Do not stop at naming components. Follow execution. If a screen or flow exists, identify what the user action or lifecycle event is, what code path it enters, what state changes occur, and what UI is rendered as a result.

Use fast code search first, then open only the files needed to verify relationships. When sub-agents are used, consolidate their findings into one verified model before diagramming. Resolve overlaps and contradictions explicitly rather than copying notes through unchanged.

Step 4 - Group the architecture

Organise findings into the smallest useful set of diagrams, for example:

  • Screen or route flow
  • Dependency graph
  • Data flow
  • Decision or conditional flow
  • External integration map

Combine diagrams when domains overlap heavily. Split diagrams when a single graph becomes hard to read. For non-trivial domains, split the written analysis into focused screen-level and flow-level artifacts instead of collapsing everything into one long README.
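
As an illustration, a screen or route flow for a hypothetical checkout domain (all screen names here are invented, not taken from any real codebase) might be sketched as:

```mermaid
flowchart LR
    Home[Home screen] -->|tap Cart| Cart[Cart screen]
    Cart -->|tap Checkout| Summary[Checkout summary]
    Summary -->|submit| Confirm[Order confirmation]
    Summary -->|payment fails| PayError[Payment error state]
    PayError -->|retry| Summary
```

Each edge label names the concrete trigger, so the diagram doubles as a map of entry points for the deeper flow docs.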

Step 5 - Write the domain docs

For each domain, create architecture/<domain>/README.md with:

  • Scope
  • Entry points
  • Screen map
  • Flow map
  • Inferred architecture patterns
  • Core components
  • Coupling and change-risk hotspots
  • External dependencies
  • Links to screen and flow deep dives
  • Open questions or inferred edges

Add Mermaid diagrams in the same file, or in adjacent Markdown files when keeping them separate is clearer. Prefer multiple focused files over one shallow summary.

For non-trivial domains, also create:

  • architecture/<domain>/screens/<screen>.md for each important screen, route, or view container
  • architecture/<domain>/flows/<flow>.md for each important lifecycle, user journey, async process, submission path, sync path, or error-recovery path

Use the domain README as the index and synthesis layer, not the place to dump every detail.

Use this structure by default for non-trivial domains:

# <Domain>
## Scope
## Entry Points
## Screen Map
## Flow Map
## Architecture Pattern
## Lifecycle Flows
## Coupling
## External Dependencies
## Screen Deep Dives
## Flow Deep Dives
## Open Questions

Within those sections:

  • Name concrete files and symbols, not just layers
  • Show ordered call sequences where possible
  • Separate observed facts from inferred intent
  • Call out missing or inconsistent state handling if discovered
  • When a view state or error path is implied but not directly rendered in code, mark it as inferred

Use this structure by default for screens/<screen>.md:

# <Screen>
## Purpose
## Entry Conditions
## Rendered States
## Lifecycle Triggers
## Function Call Chains
## State Sources and Sinks
## Success and Error Handling
## Helpers
## Coupled Files
## Conditionals and Inferred Decisions
## Open Questions

Use this structure by default for flows/<flow>.md:

# <Flow>
## Purpose
## Start Trigger
## Participating Files
## Ordered Execution Path
## State Transitions
## Success Outcome
## Error Outcome
## Retry or Recovery Behavior
## Helpers and Shared Logic
## Coupling and Risk
## Inferred Design Rationale
## Open Questions

Screen docs should answer:

  • What the user sees
  • What starts work
  • Which functions run in order
  • Which states can render
  • How success, empty, loading, and error outcomes differ
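
The Rendered States section of a screen doc can often be captured as a small state diagram. This is a minimal sketch assuming a generic load-driven screen; the states and triggers are illustrative defaults, not observed behavior:

```mermaid
stateDiagram-v2
    [*] --> Loading: screen appears
    Loading --> Success: data returned
    Loading --> Empty: zero results
    Loading --> Error: request failed
    Error --> Loading: retry tapped
    Success --> Loading: pull to refresh
```

If a state in the diagram is implied by code but never directly rendered, mark it as inferred in the surrounding text.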

Flow docs should answer:

  • What kicks the flow off
  • Which files and symbols participate
  • How data and control move step by step
  • Where branching, retries, or failure handling happen
  • Why the implementation may have been structured this way

Step 6 - Write the overview

Create architecture/overview.md when the project has more than one screen, route, or domain of interest. Show how major screens or flows tie together and reference deeper docs such as architecture/<domain>/README.md.
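
For example, overview.md for a hypothetical app with auth, home, and settings domains (domain names are illustrative) might tie the top-level flows together like this:

```mermaid
flowchart TD
    Launch[App launch] --> Auth["auth: sign-in flow"]
    Auth -->|authenticated| Home["home: main screen"]
    Home --> Settings["settings: preferences"]
```

Each node should correspond to a domain with its own architecture/<domain>/README.md deep dive.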

Step 7 - Confirm succinctly

Return:

  • Domains analysed
  • Files written
  • Main architectural findings
  • Any inferred or unresolved relationships

Diagram guidance

  • Use Mermaid.js flowcharts or sequence diagrams unless another Mermaid format is clearly better.
  • Keep node labels short and descriptive.
  • Prefer a few connected diagrams over one unreadable graph.
  • If a relationship is inferred rather than directly observed, label it as inferred in the Markdown near the diagram.
  • For UI domains, prefer at least these diagrams when evidence supports them:
    • Screen navigation or route map
    • Lifecycle sequence from user or framework trigger to rendered state
    • State transition or decision flow for loading, success, empty, and error outcomes
    • Dependency or coupling graph for the files that jointly implement the area
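
A lifecycle sequence for a hypothetical profile screen (all participant and method names are invented for illustration) could look like:

```mermaid
sequenceDiagram
    participant User
    participant ProfileScreen
    participant ProfileViewModel
    participant ProfileService
    User->>ProfileScreen: open screen
    ProfileScreen->>ProfileViewModel: onAppear()
    ProfileViewModel->>ProfileService: fetchProfile()
    ProfileService-->>ProfileViewModel: profile or error
    ProfileViewModel-->>ProfileScreen: render Success or Error state
```

Any edge in such a diagram that was not directly observed in code should be labeled as inferred in the adjacent Markdown.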
Repository: lmcjt37/skills
First seen: Mar 26, 2026