graphify-dotnet
Trigger On
- `graphify`, `graphify run`, `graphify watch`, `graphify benchmark`, or `graphify config`
- generating `graph.json`, `graph.html`, `graph.svg`, `graph.cypher`, `GRAPH_REPORT.md`, `obsidian/`, or `wiki/` - building onboarding maps, architecture snapshots, or dependency-discovery artifacts from a repository
- choosing between AST-only extraction and AI-enriched semantic extraction
- pushing graph output into Neo4j, Obsidian, wiki docs, or CI artifacts
Workflow
- Confirm the problem is structural discovery, architecture review, onboarding, or graph export. If the user only needs one symbol lookup, one bug fix, or one dependency trace, normal repo search and tests are cheaper than a full graph run.
- Install and verify the tool before doing anything else:

  ```shell
  dotnet --version
  dotnet tool install -g graphify-dotnet
  graphify --version
  ```

- Start with a bounded AST-only run so the first output is fast and deterministic:

  ```shell
  graphify run ./src --format json,html,report --provider none --verbose
  ```

- Review outputs in this order:
  - `GRAPH_REPORT.md` for quick signal
  - `graph.html` for visual exploration
  - `graph.json` for scripting and downstream tooling
- Add AI enrichment only when inferred relationships or conceptual grouping matter more than strict syntax-only structure.
- Expand export formats for the real consumer:
  - `svg` for static docs and PRs
  - `neo4j` for graph queries
  - `obsidian`, `wiki` for knowledge-base or onboarding flows
- Use `watch` for iterative architecture work, but rerun a clean `run` periodically because deletes and renames can leave stale references behind.
- Run `benchmark` only after you already trust the generated `graph.json`; its value is comparative token-reduction evidence, not billing-grade accounting.
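Once `graph.json` exists, a few lines of scripting give a quick sanity check before any downstream tooling consumes it. The schema below (top-level `nodes` and `edges` arrays, each node carrying a `type` field) is an assumption for illustration, not documented graphify output; verify the field names against an actual file first.

```python
import json
from collections import Counter


def summarize_graph(path: str) -> dict:
    """Summarize a graphify graph.json file.

    Assumes top-level "nodes" and "edges" arrays with a "type" field
    per node -- check these names against your real graph.json.
    """
    with open(path, encoding="utf-8") as f:
        graph = json.load(f)
    nodes = graph.get("nodes", [])
    edges = graph.get("edges", [])
    return {
        "node_count": len(nodes),
        "edge_count": len(edges),
        "node_types": Counter(n.get("type", "unknown") for n in nodes),
    }
```

A graph with zero nodes usually signals a path-selection or `.gitignore` problem rather than a parser failure, so this check is worth running before anything else.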
Architecture
```mermaid
flowchart LR
    A["Repository or subtree"] --> B["graphify run / watch"]
    B --> C{"AI provider configured?"}
    C -->|No| D["AST extraction only"]
    C -->|Yes| E["AST + semantic extraction"]
    D --> F["Knowledge graph + Louvain communities"]
    E --> F
    F --> G{"Output target"}
    G -->|Human review| H["graph.html + GRAPH_REPORT.md"]
    G -->|Automation| I["graph.json"]
    G -->|Static docs| J["graph.svg"]
    G -->|Knowledge base| K["obsidian/ or wiki/"]
    G -->|Graph queries| L["graph.cypher for Neo4j"]
```
Practical Recipes
Write a quick architecture snapshot
```shell
graphify run . --format html,report --output ./artifacts/graph
```

Use this when you need a fast human-readable map of the current repo. Read `./artifacts/graph/GRAPH_REPORT.md` first, then open `./artifacts/graph/graph.html`.
Write queryable and documentation exports
```shell
graphify run ./src --format json,neo4j,svg,obsidian,wiki --output ./graphify-out
```
Use this when the graph will be consumed by scripts, Neo4j, docs, or knowledge-base tooling instead of only a browser.
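When scripts are the consumer, a common first task is tracing which edges touch a given node. The sketch below assumes each edge carries `source` and `target` fields; that naming is an assumption, so confirm it against a sample entry in your actual `graph.json`.

```python
import json


def edges_touching(path: str, node_id: str) -> list[dict]:
    """Return the edges whose source or target matches node_id.

    Assumes edge entries look like {"source": ..., "target": ...};
    adjust the field names to match your graph.json if they differ.
    """
    with open(path, encoding="utf-8") as f:
        graph = json.load(f)
    return [
        e for e in graph.get("edges", [])
        if e.get("source") == node_id or e.get("target") == node_id
    ]
```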
Read and benchmark an existing graph
```shell
graphify benchmark ./graphify-out/graph.json
```
Treat this as a heuristic efficiency check for AI-context workflows after the graph already exists.
Provider Choice
- `none`: best first run, deterministic, fast, no external dependencies
- `ollama`: local and privacy-friendly; good for sensitive code or low-cost experimentation
- `azureopenai`: enterprise-hosted semantic extraction with explicit endpoint, key, and deployment
- `copilotsdk`: lowest-friction option for teams that already authenticate with GitHub Copilot
Choose the provider by operational constraint first, not by model hype:
- privacy or offline requirements: `ollama`
- enterprise Azure governance: `azureopenai`
- fastest setup for existing subscribers: `copilotsdk`
- no semantic extraction required: `none`
Configuration Patterns
graphify resolves settings in this priority order:
- CLI arguments
- user secrets
- environment variables
- `appsettings.local.json`
- `appsettings.json`
Use `graphify config` for the interactive wizard and `graphify config show` to inspect the resolved effective settings.
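For file-based configuration, an `appsettings.local.json` fragment mirroring the environment-variable keys might look like the sketch below. The section layout is an assumption inferred from the `GRAPHIFY__` prefix pattern, not documented schema; confirm the effective keys with `graphify config show`.

```json
{
  "GRAPHIFY": {
    "Provider": "Ollama",
    "Ollama": {
      "Endpoint": "http://localhost:11434",
      "ModelId": "llama3.2"
    }
  }
}
```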
Common environment-variable patterns:
```shell
# AST-only explicit override
export GRAPHIFY__Provider=None

# Ollama
export GRAPHIFY__Provider=Ollama
export GRAPHIFY__Ollama__Endpoint=http://localhost:11434
export GRAPHIFY__Ollama__ModelId=llama3.2

# Azure OpenAI
export GRAPHIFY__Provider=AzureOpenAI
export GRAPHIFY__AzureOpenAI__Endpoint=https://myresource.openai.azure.com/
export GRAPHIFY__AzureOpenAI__ApiKey=...
export GRAPHIFY__AzureOpenAI__DeploymentName=gpt-4o

# GitHub Copilot SDK
export GRAPHIFY__Provider=CopilotSdk
export GRAPHIFY__CopilotSdk__ModelId=gpt-4.1
```
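The double underscore in these variable names is the standard .NET configuration convention for nesting: the environment-variable provider translates `__` into the `:` hierarchy separator, which is why `GRAPHIFY__Ollama__Endpoint` and a nested `Ollama.Endpoint` JSON section resolve to the same setting. A minimal sketch of the mapping:

```python
def env_to_config_key(name: str) -> str:
    """Translate a .NET environment-variable name to its config key.

    .NET's environment configuration provider treats "__" as the ":"
    hierarchy separator, so GRAPHIFY__Ollama__Endpoint resolves to the
    configuration key GRAPHIFY:Ollama:Endpoint.
    """
    return name.replace("__", ":")
```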
Tradeoffs And Constraints
- AST-only mode is reliable for structural facts such as files, classes, methods, and imports, but it will not infer conceptual links that are absent from syntax.
- AI enrichment produces richer graphs but adds latency, provider setup, quota or subscription concerns, and privacy review.
- `watch` mode is an inner-loop accelerator, not a perfect source of truth. Deleted files are not fully removed from the graph until a clean rebuild, and renames can temporarily duplicate nodes.
- `graph.html` is great for quick inspection, but large graphs can render slowly and some browsers block `file://` loading. Serve the output folder locally if the page renders blank.
- graphify respects `.gitignore`, so an empty graph can be a path-selection problem instead of a parser failure.
- `benchmark` is approximate. The source uses heuristic token estimation, so treat the numbers as directional rather than invoice-grade.
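For the `file://` workaround, any static file server over the output folder works. A minimal Python sketch (the `serve_graph` helper is illustrative, not part of graphify):

```python
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler


def serve_graph(directory: str, port: int = 8000) -> HTTPServer:
    """Serve a graphify output folder over HTTP so graph.html loads
    without file:// restrictions.

    Call serve_forever() on the returned server, then open
    http://localhost:<port>/graph.html in a browser.
    """
    handler = partial(SimpleHTTPRequestHandler, directory=directory)
    return HTTPServer(("127.0.0.1", port), handler)
```

From a shell, `python -m http.server --directory ./graphify-out` achieves the same thing with no code.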
Deliver
- a justified choice of AST-only vs AI-enriched extraction
- concrete `graphify` commands for the repo, folder, or output consumer
- the right export-format set for humans, docs, scripts, or graph databases
- configuration guidance that fits the chosen provider and operating model
- a validation path for the produced graph artifacts
Validate
- `dotnet --version` shows a .NET 10 SDK
- `graphify --version` resolves after installation
- `graphify run <path> --format json,html,report -v` completes without provider or path errors
- the output folder contains the expected artifacts for the selected formats
- `graphify config show` reflects the intended provider configuration when AI enrichment is enabled
- `graphify benchmark <graph.json>` runs only after a real graph file exists
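The artifact check above can be scripted. The format-to-filename mapping in this sketch is an assumption based on the output names listed in this skill, so adjust it if your graphify version emits different files:

```python
import os

# Expected primary artifact per graphify --format flag. This mapping is
# an assumption drawn from the artifact names in this skill, not from
# graphify's documentation -- verify against a real run.
FORMAT_ARTIFACTS = {
    "json": "graph.json",
    "html": "graph.html",
    "svg": "graph.svg",
    "neo4j": "graph.cypher",
    "report": "GRAPH_REPORT.md",
}


def missing_artifacts(output_dir: str, formats: list[str]) -> list[str]:
    """Return the expected artifact names absent from output_dir."""
    return [
        name for fmt in formats
        if (name := FORMAT_ARTIFACTS.get(fmt))
        and not os.path.exists(os.path.join(output_dir, name))
    ]
```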
Load References
- `references/source-map.md` - upstream repository and docs map with direct links to the README, CLI docs, provider setup guides, sample project, and export-format docs
- `references/usage-and-operations.md` - practical commands, provider setup patterns, export selection, watch-mode behavior, troubleshooting, and benchmark caveats