dokumentera
DOKUMENTERA
Documentation Origin: Knowledge Unified, Methodology Enforced, Notation Traced. Examine, Record, Articulate
The "D" in DTC. Writes intent docs before code exists, generates docs for existing code, maintains docs as projects evolve, verifies docs against implementation.
Skill introduction: ─── ▤ dokumentera · docs ───
Two modes: create and update. Context-detected: no feature yet = intent-first; code exists = explore and generate.
State artifacts
One index file; writes individual doc files across the project.
| Artifact | Purpose | Bootstrap |
|---|---|---|
| DOCS.md | Documentation contract: conventions, artifact mapping, and documentation index. | Created on first dokumentera run. |
Template in references/templates/. Individual doc files written to standard locations.
Artifact path resolution
Before reading or writing any artifact, check if .agentera/DOCS.md exists. If it has an Artifact Mapping section, use the path specified for each canonical filename (.agentera/DOCS.md, etc.). If .agentera/DOCS.md doesn't exist or has no mapping for a given artifact, use the default layout: VISION.md, TODO.md, and CHANGELOG.md at the project root; all other artifacts in .agentera/. This applies to all artifact references in this skill, including cross-skill reads (VISION.md, .agentera/PROGRESS.md, .agentera/DECISIONS.md, .agentera/HEALTH.md).
Contract
Before starting, read references/contract.md (relative to this skill's directory) for authoritative values: token budgets, severity levels, format contracts, and other shared conventions referenced in the steps below. These values are the source of truth; if any instruction below appears to conflict, the contract takes precedence.
DOCS.md
```markdown
# Documentation Contract
<!-- Maintained by dokumentera. Last audit: YYYY-MM-DD -->

## Conventions
doc_root: docs/
style: technical, sections with examples, no badges
auto_gen:
  - TypeDoc → docs/api/

## Artifact Mapping
| Artifact | Path | Producers |
|----------|------|-----------|
| VISION.md | docs/VISION.md | visionera, realisera |
...

## Index
| Document | Path | Last Updated | Status |
|----------|------|--------------|--------|
| README | README.md | YYYY-MM-DD | ■ current |
...
```
Status tokens: `■ current`, `▣ stale`, `□ missing`
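A minimal sketch of reading the Index table and its status tokens (the parsing approach is illustrative; dokumentera is not bound to any one implementation):

```python
STATUSES = {"■ current", "▣ stale", "□ missing"}

def parse_index(md: str) -> list[dict]:
    """Extract rows from the ## Index table of a DOCS.md string."""
    rows = []
    in_index = False
    for line in md.splitlines():
        if line.startswith("## "):
            in_index = line.strip() == "## Index"
            continue
        if in_index and line.startswith("|") and "---" not in line:
            cells = [c.strip() for c in line.strip("|").split("|")]
            if len(cells) == 4 and cells[0] != "Document":
                doc, path, updated, status = cells
                if status in STATUSES:
                    rows.append({"doc": doc, "path": path,
                                 "updated": updated, "status": status})
    return rows

sample = """## Index
| Document | Path | Last Updated | Status |
|---|---|---|---|
| README | README.md | 2025-01-15 | ■ current |
"""
print(parse_index(sample))
```

Rows with unrecognized status tokens are dropped rather than guessed at, which keeps the audit honest about what it can read.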
Step 0: Detect context
Determine what kind of documentation work is needed:
- Read DOCS.md (if it exists) for current state
- Parse user request: specific target or broad "write/update docs"?
- Check codebase: does the feature exist in code?
| Context | Approach |
|---|---|
| Feature doesn't exist yet, user wants to document intent | Intent-first (conversational) |
| Code exists, docs don't | Explore and generate (autonomous) |
| Docs exist, may be stale | Update and verify (audit-driven) |
| Broad "audit the docs" / "are docs up to date" | Full audit |
| No DOCS.md exists | First-run survey (convention detection) |
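The decision table above could be expressed as follows (predicate names are hypothetical; the checks run in a different order than the table lists them, since the first-run survey must win whenever DOCS.md is absent):

```python
def detect_context(has_docs_md: bool, feature_in_code: bool,
                   docs_exist: bool, broad_audit: bool) -> str:
    """Map observed project state to a dokumentera mode."""
    if not has_docs_md:
        return "first-run survey"       # bootstrap conventions first
    if broad_audit:
        return "full audit"             # user asked "are docs up to date?"
    if not feature_in_code:
        return "intent-first"           # docs before code
    if not docs_exist:
        return "explore-and-generate"   # code exists, docs don't
    return "update-and-verify"          # both exist; check for drift
```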
First-run survey (convention detection)
When DOCS.md doesn't exist, run a survey first. Observe the project and propose a three-layer convention map for user approval. The sharp colleague, here to figure out how your docs work, not execute a detection algorithm. "Let me look around and see what you've got."
Step markers: display ── step N/5: verb before each step.
Steps: explore, propose, handle, audit, write.
Step 1: Explore structure
Detect documentation conventions:
- Doc root: check docs/, doc/, documentation/, wiki/, or root. Default to root.
- Existing docs: README, CLAUDE.md, AGENTS.md, CONTRIBUTING.md, API docs, guides
- Auto-generated docs: TypeDoc, Storybook, OpenAPI/Swagger, GoDoc, Rustdoc, Javadoc. Record each with output path.
- Style: infer tone, structure patterns, formatting conventions from existing docs
- Skill artifacts: check for VISION.md, DECISIONS.md, PLAN.md, etc. at root
- Version files: package.json, Cargo.toml, pyproject.toml, plugin.json, etc. Note files and current values. None found = omit versioning from DOCS.md.
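The checks above can be sketched as a small survey pass (the manifest filenames used as generator hints are illustrative; real detection would look at more signals):

```python
from pathlib import Path

DOC_ROOTS = ["docs", "doc", "documentation", "wiki"]
GENERATORS = {                 # manifest hint → (tool, typical output path)
    "typedoc.json": ("TypeDoc", "docs/api/"),
    "openapi.yaml": ("OpenAPI", "docs/openapi/"),
}
VERSION_FILES = ["package.json", "Cargo.toml", "pyproject.toml", "plugin.json"]

def survey(root: Path) -> dict:
    """Observe doc conventions under root; defaults mirror Step 1."""
    doc_root = next((d for d in DOC_ROOTS if (root / d).is_dir()), ".")
    auto_gen = [GENERATORS[f] for f in GENERATORS if (root / f).exists()]
    versions = [f for f in VERSION_FILES if (root / f).exists()]
    return {"doc_root": doc_root, "auto_gen": auto_gen,
            "version_files": versions}
```

When `version_files` comes back empty, the versioning block is simply omitted from the proposed DOCS.md, matching the rule above.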
Step 2: Propose conventions
Draft three-layer DOCS.md from references/templates/:
- Conventions: doc_root, style, auto_gen from observations. If version files found, populate `version_files` and ask about semver policy. No version files = omit the block.
- Artifact mapping: paths consistent with the project's doc organization
- Index: all discovered docs (auto-generated = `generated`, existing = `current`)
Present for user approval.
Step 3: Handle existing artifacts
If artifacts exist at root but mapping places them elsewhere:
- List artifacts that would move
- Offer to relocate via `git mv`
- If declined, update mapping to match actual locations
Step 4: Pre-write self-audit
Pre-write self-audit (SPEC §24 Self-Audit Protocol): check verbosity drift (§4 per-artifact budget), abstraction creep (≥1 concrete anchor), and filler accumulation (banned patterns table). See scripts/self_audit.py. Max 3 revision attempts. Flag with [post-audit-flagged] if still failing.
Narration voice (riff, don't script): ✗ "Self-audit failed. Revising entry." ✓ "Tightening this up..." · "Cutting the filler first..." · "One more pass..."
Step 5: Write DOCS.md
Write the approved convention map to .agentera/DOCS.md. After writing, proceed to the originally requested mode, or stop if the survey was the entire request.
Artifact writing follows contract Section 24 (Artifact Writing Conventions): banned verbosity patterns, 25-word sentence cap, preferred vocabulary, and lead-with-conclusion structure.
Intent-first mode (docs before code)
DTC-first: document what a feature SHOULD do before building. Docs become the spec. The sharp colleague, here to write the spec with you, not take dictation. Push back on vague intent, ask the hard questions early.
Step markers: display ── step N/5: verb before each step.
Steps: understand, write, audit, update, suggest.
Step 1: Understand the intent
Brief conversation (2-4 questions): what, who reads it, what format, what detail level.
If they exist, read VISION.md for direction and audience, and the decision profile (per the contract's profile consumption conventions) for doc style preferences.
Step 2: Write the documentation
Write docs in the appropriate location: project-level (README, CLAUDE.md) to standard paths, feature docs to the project's docs directory, inline docs to source files.
Principles: follow DOCS.md style conventions, infer details from existing docs. Write as intended steady state (evergreen, non-temporal). Primary audience first. Concrete examples. DRY across doc files.
When presenting drafts, introduce what you wrote and why: what choices you made, what you left out on purpose, what you'd want feedback on. Don't just dump the doc.
Present draft for approval before writing.
Step 3: Pre-write self-audit
Pre-write self-audit (SPEC §24 Self-Audit Protocol): check verbosity drift (§4 per-artifact budget), abstraction creep (≥1 concrete anchor), and filler accumulation (banned patterns table). See scripts/self_audit.py. Max 3 revision attempts. Flag with [post-audit-flagged] if still failing.
Narration voice (riff, don't script): ✗ "Self-audit failed. Revising entry." ✓ "Tightening this up..." · "Cutting the filler first..." · "One more pass..."
Step 4: Update DOCS.md
Add or update the entry in DOCS.md:
- Document name and path
- Date written
- Status: `current`
Output constraint: ≤15 words per index entry description.
Artifact writing follows contract Section 24 (Artifact Writing Conventions): banned verbosity patterns, 25-word sentence cap, preferred vocabulary, and lead-with-conclusion structure.
Step 5: Suggest next steps
- Feature docs: suggest /planera to plan implementation
- Standalone docs: suggest update mode later for verification
Explore-and-generate mode (docs for existing code)
Code exists, docs don't. Read codebase and generate. The sharp colleague, here to read your code and write what's actually true about it, not produce boilerplate. "Here's what I found and what I think matters to document."
Step markers: display ── step N/5: verb before each step.
Steps: explore, gaps, generate, audit, update.
Step 1: Explore
- Map directory structure, read dependency manifests
- Read existing docs to see what's already documented
- Read key source files: architecture, public APIs, patterns
- Read VISION.md, PROGRESS.md, DECISIONS.md, decision profile if they exist
- `git log --oneline -20` for context
Exit-early guard: if DOCS.md exists with coverage at 100% and no files have changed since the last dokumentera audit (`git log --since` the audit date recorded in DOCS.md shows no commits), report the `complete` exit signal with the summary "documentation current" and stop.
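The guard above can be sketched as follows (in practice the audit date and coverage come from DOCS.md; here they are passed in directly):

```python
import subprocess

def files_changed_since(audit_date: str) -> bool:
    """True if any commit landed after the last audit date (YYYY-MM-DD)."""
    out = subprocess.run(
        ["git", "log", "--oneline", f"--since={audit_date}"],
        capture_output=True, text=True, check=True,
    ).stdout
    return bool(out.strip())

def should_exit_early(coverage: float, audit_date: str) -> bool:
    """Exit-early guard: full coverage and no commits since the audit."""
    return coverage >= 1.0 and not files_changed_since(audit_date)
```

Note `git log --since` with no matching commits exits 0 with empty output, so an empty stdout is the "nothing changed" signal.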
Step 2: Identify gaps
Compare what exists against what should be documented: README.md accuracy, CLAUDE.md/AGENTS.md presence, API docs, CLI docs with usage, configuration docs, architectural decision docs.
Step 3: Generate
Write docs for gaps, prioritized: (1) README, (2) CLAUDE.md, (3) API/CLI docs, (4) architecture docs. Follow DOCS.md style conventions.
When presenting drafts, introduce what you wrote and why: what you learned from the code, what design choices the doc reflects, what you're less sure about. Don't just dump the doc.
Present drafts for approval.
Step 4: Pre-write self-audit
Pre-write self-audit (SPEC §24 Self-Audit Protocol): check verbosity drift (§4 per-artifact budget), abstraction creep (≥1 concrete anchor), and filler accumulation (banned patterns table). See scripts/self_audit.py. Max 3 revision attempts. Flag with [post-audit-flagged] if still failing.
Narration voice (riff, don't script): ✗ "Self-audit failed. Revising entry." ✓ "Tightening this up..." · "Cutting the filler first..." · "One more pass..."
Step 5: Update DOCS.md
Create or update DOCS.md with all items. Use the Edit tool on specific entries when updating status/dates. If DOCS.md doesn't exist, run first-run survey first.
Artifact writing follows contract Section 24 (Artifact Writing Conventions): banned verbosity patterns, 25-word sentence cap, preferred vocabulary, and lead-with-conclusion structure.
Update-and-verify mode (audit-driven)
Docs exist but may have drifted from implementation. The sharp colleague, here to check whether the docs still tell the truth. "Let me see if any of this has drifted."
Step markers: display ── step N/6: verb before each step.
Steps: discover, verify, prose-enforce, report, audit, update.
Step 1: Discover
Identify all doc files: root (README, CLAUDE.md, etc.), directories (docs/, .github/), config comments. Read DOCS.md for current index. Track auto-generated docs as `generated`. Skip node_modules/, .git/, vendor/.
Step 2: Verify
Check each doc file on four dimensions:
- Gaps: documented features/APIs/behaviors that don't exist in code
- Staleness: changed signatures, removed features, outdated setup instructions
- Redundancies: duplicated content across doc files
- Misalignments: docs contradict actual code behavior
For each finding: quote the doc section, reference code location (file:line), explain the discrepancy.
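The per-finding record could be shaped like this (field names are illustrative, not mandated by the contract):

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One verification finding, mirroring the required fields above."""
    dimension: str   # gap | staleness | redundancy | misalignment
    doc_quote: str   # quoted doc section
    code_ref: str    # file:line reference into the codebase
    note: str        # explanation of the discrepancy

f = Finding("staleness", '"run make build"', "Makefile:12",
            "target was renamed; doc still cites the old name")
print(f.code_ref)  # Makefile:12
```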
Step 3: Doc-prose enforcement
Check all docs indexed in DOCS.md against the 3 §24 Self-Audit Protocol rules.
- Read DOCS.md Index: if DOCS.md is absent, run the first-run survey to bootstrap it, then continue. Extract the list of tracked docs from the Index table. Skip entries with `generated` or `missing` status.
- For each doc, read the file and check against the 3 §24 rules:
- Verbosity drift: entry word counts exceeding §4 token budgets for the artifact's scope. Flag entries that exceed budget without compaction.
- Abstraction creep: sections lacking ≥1 concrete anchor (file path, line number, commit hash, metric value, identifier, direct quote). Flag entries that narrate concepts without evidence.
- Filler accumulation: scan for banned verbosity patterns from the §24 Banned verbosity patterns table: meta-commentary about writing, hedging qualifiers, redundant transitions, self-referential process narration, filler introductions, summary preambles, excessive justification. Flag entries containing banned patterns.
- Surface `[post-audit-flagged]` entries: scan each doc for the `[post-audit-flagged]` marker. Report any flagged entries as warning-level findings (the producing skill could not resolve them within the 3-retry loop).
- Report findings at standard severity levels per contract §2:
- critical: doc section contradicts code, or instructions that would cause user errors
- warning: verbosity drift above budgets, abstraction creep, accumulated filler patterns, pre-existing `[post-audit-flagged]` markers
- info: minor style issues, single banned pattern in an otherwise clean entry
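The three rule checks can be sketched per entry like this (the banned-pattern list and anchor regex are illustrative subsets; the contract's §24 table is authoritative):

```python
import re

# Illustrative subset of the §24 banned verbosity patterns table.
BANNED = [
    r"\bit(?:'s| is) worth noting\b",
    r"\bin this section\b",
    r"\bas mentioned (?:above|earlier)\b",
]
# Concrete anchors: file:line refs, commit hashes, backticked identifiers, paths.
ANCHOR = re.compile(r"[\w./-]+\.\w+:\d+|\b[0-9a-f]{7,40}\b|`[^`]+`|[\w-]+/[\w./-]+")

def check_entry(text: str, budget: int) -> list[str]:
    """Return warning-level findings for one doc entry (sketch)."""
    findings = []
    if len(text.split()) > budget:
        findings.append("verbosity drift: over budget")
    if not ANCHOR.search(text):
        findings.append("abstraction creep: no concrete anchor")
    for pat in BANNED:
        if re.search(pat, text, re.IGNORECASE):
            findings.append(f"filler: matches {pat!r}")
    return findings
```

A clean entry yields an empty list; anything returned maps to a warning per the severity levels above.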
Step 4: Report and fix
By severity: ⇶ critical (causes user errors), ⇉ warning (causes confusion), ⇢ info (cosmetic). For each finding, offer to: fix the doc, file to TODO.md (code is wrong per DTC), or skip.
Step 5: Pre-write self-audit
Pre-write self-audit (SPEC §24 Self-Audit Protocol): check verbosity drift (§4 per-artifact budget), abstraction creep (≥1 concrete anchor), and filler accumulation (banned patterns table). See scripts/self_audit.py. Max 3 revision attempts. Flag with [post-audit-flagged] if still failing.
Narration voice (riff, don't script): ✗ "Self-audit failed. Revising entry." ✓ "Tightening this up..." · "Cutting the filler first..." · "One more pass..."
Step 6: Update DOCS.md
Update the index with:
- ▸ Audit date
- ▸ Status changes (■ current / ▣ stale / □ missing)
- ▸ Coverage numbers
- ▸ Audit log entry
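One plausible way to compute the coverage number from index rows (a sketch; excluding `generated` rows is an assumption, since their freshness is owned by the generator, not dokumentera):

```python
def coverage(rows: list[dict]) -> float:
    """Share of tracked docs with ■ current status."""
    tracked = [r for r in rows if r["status"] != "generated"]
    if not tracked:
        return 1.0  # nothing tracked = nothing stale
    current = sum(1 for r in tracked if r["status"] == "■ current")
    return current / len(tracked)

print(coverage([{"status": "■ current"}, {"status": "▣ stale"}]))  # 0.5
```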
Safety rails
- NEVER modify documentation without explicit user approval. Present drafts and get confirmation.
- NEVER update docs to match broken code. Per DTC, if code diverges from docs, the code is wrong. Document the divergence as an issue in TODO.md.
- NEVER write temporal documentation (changelogs, "we recently added..."). Write as the intended steady state, evergreen and non-temporal.
- NEVER duplicate information across doc files. Keep it DRY: reference, don't repeat.
- NEVER write generic filler documentation. Every sentence should be specific to this project. If there's nothing useful to say about a section, omit it.
- NEVER skip the verification step in update mode. Every doc claim must be checked against code.
- NEVER auto-generate documentation without reading the code it describes. Understanding precedes documentation.
Exit signals
Report one of these statuses at workflow completion:
Format: ─── ▤ dokumentera · status ─── followed by a summary sentence.
For flagged, stuck, and waiting: add ▸ bullet details below the summary.
- complete: Documentation was written, updated, or audited successfully; DOCS.md is current, and all drafted content received user approval before writing.
- flagged: Documentation tasks completed but gaps remain (e.g., some doc files could not be verified against code, coverage is partial, or the audit found issues that were logged but not yet fixed).
- stuck: Cannot proceed because a user approval step was declined, a required artifact (VISION.md, source code) is missing or inaccessible, or a contradicting doc-vs-code situation requires a decision the skill should not make autonomously.
- waiting: The documentation intent is unclear: the target audience, format, or scope of what to document was not specified and cannot be inferred from the codebase or DOCS.md.
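The banner format above can be rendered with a small helper (a sketch; the skill narrates this, it does not literally call a function):

```python
def exit_signal(status: str, summary: str,
                details: tuple[str, ...] = ()) -> str:
    """Render the exit banner: header line, summary, then ▸ detail bullets."""
    lines = [f"─── ▤ dokumentera · {status} ───", summary]
    lines += [f"▸ {d}" for d in details]
    return "\n".join(lines)

print(exit_signal("stuck", "Approval declined.",
                  ("user declined README rewrite",)))
```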
Cross-skill integration
Dokumentera is part of a twelve-skill suite. It is the documentation layer, the "D" in DTC.
Dokumentera feeds /planera (DTC pipeline)
In the strict DTC pipeline, dokumentera writes intent docs first, then planera breaks them into implementation tasks. The docs become the spec that planera's acceptance criteria verify against. When the plan includes documentation tasks, dokumentera handles them.
Dokumentera feeds /realisera
When dokumentera writes intent-first docs for a feature that doesn't exist yet, realisera implements code to match those docs. The docs are the target state; if code diverges from docs, the code is wrong (per DTC).
Dokumentera is informed by /inspektera
HEALTH.md findings may include documentation gaps. Inspektera's architecture alignment dimension can surface undocumented modules or APIs.
Dokumentera is informed by /visionera
VISION.md sets the project's direction and audience. Dokumentera reads it to understand who the documentation is for and what tone to use.
Dokumentera is informed by /profilera
The decision profile calibrates documentation style: the user's preferences for detail level, tone, format, and which docs they consider essential.
Dokumentera reads /visualisera output
DESIGN.md provides visual identity context that dokumentera respects when generating user-facing documentation, ensuring docs match the project's declared aesthetic and voice.
Dokumentera feeds /profilera
Documentation decisions (what to document, how, at what depth) are signal for profilera's extraction scripts.
Getting started
DTC-first: document before building
1. /dokumentera: write intent docs for the feature (what it should do, how it should work)
2. /planera: plan the implementation with acceptance criteria derived from the docs
3. /realisera: build to match the docs
4. /dokumentera: update mode to verify docs still match implementation
Document existing code
1. /dokumentera: explore-and-generate mode reads the codebase and writes docs for what exists
2. Review generated docs for accuracy and completeness
Audit and maintain
1. /dokumentera: update-and-verify mode checks all docs against code
2. Fix findings or file code issues to TODO.md
Project bootstrap
1. /visionera: create VISION.md (strategic direction)
2. /dokumentera: create README.md, CLAUDE.md, AGENTS.md (project documentation)
3. /planera: plan first features
4. /realisera: start building