# Documentation Generator (`docs-generator`)

Restructure project documentation for clarity and accessibility.
## Repo Sync Before Edits (mandatory)
Before making any changes, sync with the remote to avoid conflicts:
```sh
branch="$(git rev-parse --abbrev-ref HEAD)"
git fetch origin
git pull --rebase origin "$branch"
```
If the working tree is dirty, stash first, sync, then pop. If origin is missing or conflicts occur, stop and ask the user before continuing.
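The dirty-tree flow (stash, sync, pop) can be sketched as follows. The demo runs in a throwaway repo so it is reproducible end to end; the sync step itself is left as a comment since it needs a real remote:

```shell
set -e
# Demo in a throwaway repo so the flow is reproducible end to end
repo="$(mktemp -d)"; cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m init
echo v1 > notes.md
git add notes.md
git -c user.name=demo -c user.email=demo@example.com commit -q -m "add notes"
echo v2 > notes.md                               # dirty the working tree
git stash push -q -m "docs-generator: pre-sync"  # 1. stash
# 2. sync would happen here: git fetch origin && git pull --rebase origin "$branch"
git stash pop -q                                 # 3. pop
cat notes.md                                     # prints "v2": the local edit survived
```

If `git stash pop` reports conflicts, that is exactly the "stop and ask the user" case above.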
## Workflow
### 0. Create Feature Branch
Before making any changes:
- Check the current branch; if already on a feature branch for this task, skip this step
- Check the repo for branch naming conventions (e.g., `feat/`, `feature/`)
- Create and switch to a new branch following the repo's convention, or fall back to `feat/docs-generator`
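The fallback case can be sketched as follows; the `feat/` and `feature/` prefixes are the conventions named above, and the demo runs in a throwaway repo:

```shell
set -e
# Demo in a throwaway repo; skips creation when already on a feature branch
repo="$(mktemp -d)"; cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m init
current="$(git rev-parse --abbrev-ref HEAD)"
case "$current" in
  feat/*|feature/*) echo "already on a feature branch: $current" ;;
  *) git checkout -q -b feat/docs-generator ;;   # fall back to the default name
esac
git rev-parse --abbrev-ref HEAD                  # prints "feat/docs-generator"
```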
### 1. Analyze Project
Scan the project to understand its shape.
Use sub-agents for parallel discovery. Launch multiple Agent tool calls concurrently to keep the main context clean:
- **Agent 1 — Stack detection**: Scan for `package.json`, `pyproject.toml`, `Cargo.toml`, `go.mod`, and `pom.xml`, and identify the project type (library, API, web app, CLI, microservices), architecture (monorepo, multi-package, single module), and primary language(s). Return a structured summary.
- **Agent 2 — Existing docs inventory**: List all existing documentation files (`README.md`, `docs/`, `CONTRIBUTING.md`, `CHANGELOG.md`, etc.) and summarize their current state — present, missing, or outdated. Return a checklist.
- **Agent 3 — User personas & project purpose**: Read the main entry point, the existing README, and any project description fields to determine the project's purpose, key features, and target user personas (end users, developers, operators). Return a short summary.
Collect the results from all three agents before proceeding.
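Agent 1's manifest scan amounts to checking for the files listed above. A minimal sketch; the demo seeds a throwaway directory with two of them so the loop has something to find:

```shell
set -e
# Seed a throwaway directory so the scan has something to find
dir="$(mktemp -d)"; cd "$dir"
touch pyproject.toml go.mod
found=""
for f in package.json pyproject.toml Cargo.toml go.mod pom.xml; do
  if [ -e "$f" ]; then
    found="$found $f"
    echo "manifest: $f"
  fi
done
echo "detected:$found"                           # prints "detected: pyproject.toml go.mod"
```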
### 2. Restructure Documentation
Use sub-agents for parallel file creation. The documentation targets below are independent of each other. Dispatch them concurrently using the Agent tool, then collect results:
- **Agent A — Root README.md**: Streamline as the project's front door using the project summary from Step 1. Include:
  - Project name + one-line description
  - Badges (build status, version, license)
  - Key features (bullet list, 3-5 items)
  - Quickstart (install + first use in < 5 min)
  - Modules/components summary with links
  - Contributing link + license
- **Agent B — Component READMEs**: Add per-module/package/service documentation using the architecture info from Step 1. Include:
  - Purpose and responsibilities
  - Setup instructions specific to the component
  - Testing commands
- **Agent C — docs/ directory**: Create only the files that are relevant to the project type identified in Step 1. Target structure:

  ```
  docs/
  ├── architecture.md      # System design, component diagrams
  ├── api-reference.md     # Endpoints, authentication, examples
  ├── database.md          # Schema, migrations, ER diagrams
  ├── deployment.md        # Production setup, infrastructure
  ├── development.md       # Local setup, contribution workflow
  ├── troubleshooting.md   # Common issues and solutions
  └── user-guide.md        # End-user documentation
  ```
Each agent should return the path(s) of files it created or updated.
Not every project needs all of these. A CLI tool likely needs `user-guide.md` but not `api-reference.md`; a library needs `api-reference.md` but not `deployment.md`. Use judgment.
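The selective creation can be sketched as follows. The file set chosen here (for a CLI tool) is one possible mapping, not a fixed rule, and the demo runs in a throwaway directory:

```shell
set -e
# Demo in a throwaway directory: create only the docs relevant to a CLI tool
proj="$(mktemp -d)"; cd "$proj"
mkdir -p docs
for f in architecture.md development.md troubleshooting.md user-guide.md; do
  touch "docs/$f"
done
ls docs                                          # no api-reference.md for a CLI tool
```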
### 3. Create Diagrams
Use Mermaid for visual documentation embedded directly in markdown:
- Architecture diagrams: Show components and their relationships
- Data flow diagrams: Show how data moves through the system
- Database schemas: ER diagrams for relational models
Example:
```mermaid
graph TD
A["Client"] --> B["API Gateway"]
B --> C["Auth Service"]
B --> D["Core Service"]
D --> E["Database"]
```
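An ER diagram follows the same pattern; the entities below are hypothetical placeholders:

```mermaid
erDiagram
    USER ||--o{ ORDER : places
    ORDER ||--|{ ORDER_ITEM : contains
    PRODUCT ||--o{ ORDER_ITEM : "appears in"
```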
### 4. Quality Checklist
After generating docs, verify:
- All internal links work (no broken references)
- Code examples are accurate and runnable
- No duplicate information across files
- Consistent formatting and heading levels
- Existing content preserved (enhanced, not replaced)
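The internal-link check can be approximated with a grep-based pass. This is a heuristic sketch, not a full markdown parser; the demo seeds one valid relative link and one broken one:

```shell
set -e
# Demo in a throwaway directory: one valid relative link, one broken one
dir="$(mktemp -d)"; cd "$dir"
mkdir -p docs && touch docs/architecture.md
printf '[arch](docs/architecture.md)\n[missing](docs/nope.md)\n' > README.md
broken=0
# Extract relative link targets: "](target)" minus external and anchor links
for link in $(grep -oE '\]\([^)#]+\)' README.md | sed 's/](\(.*\))/\1/'); do
  case "$link" in http*|mailto:*) continue ;; esac
  [ -e "$link" ] || { echo "broken: $link"; broken=$((broken+1)); }
done
echo "broken links: $broken"                     # prints "broken links: 1"
```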
## Guidelines
- Keep docs concise and scannable — prefer bullet lists and tables over prose
- Adapt structure to project type (skip categories that don't apply)
- Maintain cross-references between related docs
- Remove redundant or outdated content
- Use real examples from the codebase, not generic placeholders