ai-documenter-agent
AI Documenter Agent Skill
Surface hidden assumptions, extract tribal knowledge, and decompose problems into well-documented sub-problems with dependency graphs. Produces standardized documentation in docs/ that other specialized AI agents can consume to implement solutions without making assumptions.
Overview
This skill operates as a structured interview process. Given a problem or feature description, it:
- Discovers existing documentation to avoid re-asking known context
- Iteratively asks questions to surface hidden assumptions and tribal knowledge
- Separates business context from technical context
- Decomposes the problem into the smallest logical sub-problems
- Builds a dependency DAG among sub-problems (Mermaid + YAML)
- Documents everything in standardized folders with clear known/unknown tracking
- Produces documentation that other AI agents can directly consume
Critical principle: Never stop questioning until the user confirms no assumptions remain. When someone says "obviously" or "of course" — that is tribal knowledge. Probe deeper.
Phase 1: Context Discovery
Before asking any questions, scan the existing documentation structure.
Actions
- Scan the `docs/` directory for existing documentation:
  - Read `docs/readme.md` (root index) if it exists
  - Read `docs/business/readme.md` if it exists
  - Read `docs/technical/readme.md` if it exists
  - Read `docs/features/readme.md` and `docs/bugs/readme.md` if they exist
  - Follow links in these index files to catalog all existing documents and their topics
- Present a discovery summary to the user:
  - List what documentation already exists
  - Note what areas have coverage vs. gaps
  - This avoids asking questions already answered in existing docs
- If no `docs/` structure exists, inform the user that you will create it during the documentation phase.
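The discovery scan above can be sketched in Python. This is a minimal, illustrative sketch, not part of the skill itself: the index-file list mirrors the layout described above, and the link-following step is simplified to a regex over markdown links.

```python
import re
from pathlib import Path

# Index files to check, in the order described above.
INDEX_FILES = [
    "readme.md",
    "business/readme.md",
    "technical/readme.md",
    "features/readme.md",
    "bugs/readme.md",
]

def discover_docs(root: str = "docs") -> dict:
    """Catalog existing documentation before asking any questions."""
    base = Path(root)
    if not base.is_dir():
        return {"exists": False, "indexes": [], "linked_docs": []}

    found, linked = [], set()
    for rel in INDEX_FILES:
        index = base / rel
        if index.is_file():
            found.append(rel)
            # Follow markdown links like [title](path.md) to catalog documents.
            for target in re.findall(r"\]\(([^)]+\.md)\)", index.read_text()):
                linked.add(str(index.parent / target))
    return {"exists": True, "indexes": found, "linked_docs": sorted(linked)}
```

The returned summary maps directly onto the discovery report presented to the user: which indexes exist, and which documents they link to.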
Phase 2: Initial Problem Understanding
Ask the user to describe their problem or feature. Then classify and scope it.
Required Information
- Problem or feature description (in the user's own words)
- Classification: feature, bug fix, or other
- High-level goal: what should be true when this is done?
- Who requested or needs this? (stakeholder context)
- Any known urgency or timeline constraints
Question Strategy
Start with: "Tell me about the problem or feature you're working on."
Follow up conversationally based on what they share. Do NOT present a checklist. Listen for implicit assumptions in every answer.
Phase 3: Knowledge Extraction — Business Context
Probe for hidden assumptions about business rules, processes, and domain knowledge. This is where tribal knowledge most often hides.
Areas to Probe
- Business rules: What rules govern this domain? Are there exceptions?
- Domain terminology: What terms mean something specific here? (feeds the glossary)
- Stakeholders: Who are the users? Who else is affected? What are their concerns?
- Current process: How is this handled today (manually or otherwise)?
- Business constraints: Regulatory, compliance, contractual obligations
- Success criteria: How will the business measure whether this succeeded?
- Historical context: Why does the current system work this way? What past decisions constrain the present?
- Edge cases: Unusual but valid scenarios the business handles
Question Strategy
- Ask follow-up questions when answers contain jargon or implicit assumptions
- When the user says "obviously" or "of course" — probe deeper, that is tribal knowledge
- When the user references a process or system by name, ask what it does and why
- Check existing `docs/business/` content — skip questions already answered there
- Track every answer as known and every gap as unknown
See questioning-framework.md for detailed tribal knowledge extraction techniques.
Phase 4: Knowledge Extraction — Technical Context
Probe for hidden assumptions about architecture, systems, and technical constraints. Only go as deep as needed — the consuming agent may or may not need all technical detail.
Areas to Probe
- System architecture: What systems are involved? How do they communicate?
- Data model: What data entities are relevant? Where do they live?
- Integrations: What external systems, APIs, or services are involved?
- Technology stack: Languages, frameworks, databases, infrastructure
- Performance requirements: Latency, throughput, scale expectations
- Security requirements: Authentication, authorization, data sensitivity
- Testing approach: How is this area tested today? What coverage exists?
- Deployment: How are changes deployed? Any special considerations?
- Technical debt: Known issues, workarounds, or fragile areas near this problem
Question Strategy
- Check existing `docs/technical/` content — skip questions already answered there
- Ask about technical context only to the depth needed for the problem at hand
- Separate "must know to solve the problem" from "nice to have context"
- If the user doesn't know a technical answer, mark it as unknown — don't skip it
Phase 5: Problem Decomposition
Once you have sufficient context, decompose the problem into the smallest logical sub-problems.
Decomposition Process
- Identify atomic sub-problems — each should be independently understandable and solvable
- Map dependencies — which sub-problems require others to be completed first?
- Build the DAG — create a directed acyclic graph in both Mermaid and YAML formats
- Validate with the user — present the decomposition and ask if anything is missing or incorrectly split
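As an illustration of the dual format (the sub-problem names here are hypothetical, not prescribed by this skill), a four-node decomposition might be expressed as:

```mermaid
graph TD
    schema[define-data-schema] --> api[build-api-endpoint]
    schema --> validation[add-input-validation]
    api --> tests[write-integration-tests]
    validation --> tests
```

```yaml
sub_problems:
  - id: define-data-schema
    depends_on: []
  - id: build-api-endpoint
    depends_on: [define-data-schema]
  - id: add-input-validation
    depends_on: [define-data-schema]
  - id: write-integration-tests
    depends_on: [build-api-endpoint, add-input-validation]
```

The Mermaid diagram is for human review during validation; the YAML form is the machine-readable source a consuming agent can parse.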
For Each Sub-Problem, Capture
- Clear description of what needs to be solved
- Why this is a separate sub-problem (not merged with another)
- Dependencies (which other sub-problems must be completed first)
- Acceptance criteria (concrete, testable conditions for "done")
- Known constraints specific to this sub-problem
- Examples and edge cases (concrete input/output or before/after)
- Risk level and impact if solved incorrectly
- Links to relevant existing documentation
See decomposition-guide.md for DAG construction methodology.
Phase 6: Sub-Problem Deep Dive
Follow the DAG ordering — start with independent sub-problems (no dependencies) and work through the graph as dependencies are resolved.
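The "independent sub-problems first" ordering described above is a topological sort of the DAG. A minimal Python sketch using Kahn's algorithm over `depends_on` lists (the input shape matches the YAML form in decomposition.md; the IDs in the docstring are hypothetical):

```python
from collections import deque

def dag_order(sub_problems: dict[str, list[str]]) -> list[str]:
    """Return sub-problem IDs so every dependency comes before the
    sub-problems that need it, e.g.
    dag_order({"schema": [], "api": ["schema"]}) -> ["schema", "api"].
    """
    # Count unresolved dependencies for each node.
    remaining = {node: len(deps) for node, deps in sub_problems.items()}
    # Map each node to the nodes that depend on it.
    dependents = {node: [] for node in sub_problems}
    for node, deps in sub_problems.items():
        for dep in deps:
            dependents[dep].append(node)

    # Start with independent sub-problems (no dependencies).
    ready = deque(sorted(n for n, count in remaining.items() if count == 0))
    order = []
    while ready:
        node = ready.popleft()
        order.append(node)
        for dependent in dependents[node]:
            remaining[dependent] -= 1
            if remaining[dependent] == 0:
                ready.append(dependent)

    if len(order) != len(sub_problems):
        raise ValueError("cycle detected: the graph is not a DAG")
    return order
```

The cycle check doubles as DAG validation: if the decomposition accidentally contains a dependency loop, the deep dive cannot proceed and the decomposition should be revisited with the user.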
For Each Sub-Problem (in DAG Order)
- Ask targeted questions specific to this sub-problem
- Surface assumptions — what is the user taking for granted about this piece?
- Probe for edge cases — "What happens when...?" scenarios
- Define acceptance criteria — concrete, testable conditions
- Assess risk — what breaks if this is wrong? How hard is it to fix?
- Capture examples — concrete input/output or before/after scenarios
Completion Criteria
Continue asking questions until:
- Every sub-problem has clear acceptance criteria
- No question is answered with "it depends" without clarifying what it depends on
- The user confirms there are no remaining assumptions
- All unknowns are explicitly documented as unknowns (not silently skipped)
Important: The DAG informs question ordering as a preference, but allow the conversation to flow naturally when the user takes it somewhere productive. Don't rigidly enforce ordering if a natural thread surfaces important information.
Phase 7: Documentation Generation
Generate all documentation files in the standardized folder structure.
Output Structure
```
docs/
├── readme.md                    # Root index (create or update)
├── business/
│   ├── readme.md                # Business context index (create or update)
│   ├── glossary.md              # Domain terminology (create or update)
│   └── {topic-specific}.md      # As needed for reusable business context
├── technical/
│   ├── readme.md                # Technical context index (create or update)
│   ├── constraints.md           # Non-functional requirements (create or update)
│   └── {topic-specific}.md      # As needed for reusable technical context
├── features/{feature-name}/     # For features
│   ├── overview.md              # Problem statement, known/unknown, acceptance criteria
│   ├── decomposition.md         # Sub-problems with DAG (Mermaid + YAML)
│   └── sub-problems/
│       ├── {sub-problem-1}.md   # Individual sub-problem detail
│       └── {sub-problem-2}.md
└── bugs/{bug-name}/             # For bugs
    ├── overview.md              # Bug description, known/unknown
    ├── decomposition.md         # Root cause analysis with DAG
    └── sub-problems/
        ├── {sub-problem-1}.md
        └── {sub-problem-2}.md
```
Documentation Rules
- Link, don't duplicate — If business or technical context already exists in `docs/business/` or `docs/technical/`, link to it from the feature/bug docs. NEVER copy content.
- Update indexes — Always update the relevant `readme.md` index files to include new documents.
- Separate concerns — Business context in `docs/business/`, technical context in `docs/technical/`, problem-specific docs in `docs/features/` or `docs/bugs/`. A consuming agent chooses which folders to read.
- Update glossary — Add any new domain terms discovered during questioning to `docs/business/glossary.md`.
- Track known/unknown — Every document must have explicit Known and Unknown / Open Questions sections.
- Present documents for review — Show each document to the user before writing it. Ask for corrections.
See document-templates.md for complete templates for each document type.
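For instance, a sub-problem document obeying these rules might look like the sketch below. The section names and file paths are illustrative only, not the authoritative template.

```markdown
# Sub-Problem: add-input-validation

## Description
Validate incoming payloads before they reach the API layer.

## Dependencies
- [define-data-schema](define-data-schema.md)

## Acceptance Criteria
- Invalid payloads are rejected with a descriptive error
- Valid payloads pass through unchanged

## Known
- Payload format is specified in [constraints](../../technical/constraints.md)

## Unknown / Open Questions
- Maximum payload size has not been confirmed
```

Note how the Known section links to shared context in `docs/technical/` rather than duplicating it, and the Unknown section records the open gap explicitly.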
Wrap Up
After generating all documentation, present a summary:
- Documentation map — List all files created or updated with one-line descriptions
- Known/Unknown summary — Aggregate view of what is known and what remains open across all sub-problems
- Recommended next steps — Which sub-problem to tackle first based on the DAG (independent nodes first)
- Agent consumption guide — Brief instructions for pointing other AI agents at this documentation, including which folders are relevant for different agent types
Follow-Up Options
- Deep dive on unknowns — Investigate open questions further to convert unknowns to knowns
- Refine decomposition — Adjust sub-problems or dependencies based on new understanding
- Add more context — Document additional business or technical knowledge discovered later
- Update existing docs — Refresh previously documented context that has changed
- Generate agent prompt — Create a starter prompt that points an implementing agent at the right documentation files