# agents-md
Covers AGENTS.md (OpenCode, multi-agent harnesses) and CLAUDE.md (Claude Code). Same principles, different loading mechanics — substitute CLAUDE.md when applicable. Language/framework-agnostic.
Include only what genuinely helps. Never delete useful information without relocating it first.
## References
Read on demand — do not load all reference files at once.
| When the user mentions... | Read |
|---|---|
| Auditing an existing file | references/audit-example.md |
| Testing, TDD, test conventions | references/tdd.md |
| Monorepos, hierarchical systems, file size | references/hierarchical.md |
| Flagging problems, reviewing quality | references/anti-patterns.md |
| Compacting or optimizing an AGENTS.md | references/compaction.md |
| Coverage, completeness, what topics to investigate | references/coverage-checklist.md |
Load references/coverage-checklist.md after Phase 1 answers to guide repo investigation. Use during Phase 2 to structure gap presentation. Use during auditing for the gap analysis step.
## Core Principles
- Minimum viable requirements. Each line must earn its cost on nearly every session. Every line loads every session — brevity has direct cost benefits. Target under 200 lines for root files. Files exceeding 200 lines show measurable compliance degradation. When reviewing a file that exceeds this threshold, always cite the 200-line target and recommend restructuring.
- Two failure modes. (1) Length — compliance degrades uniformly as instruction count grows. A 500-line AGENTS.md will be partially ignored. (2) Task-irrelevant requirements — correct but unneeded instructions still get followed, increasing cost.
- Compaction is not summarization. Relocate content to scoped sub-files or `@import` targets — never paraphrase or drop details. Fewer lines in root, not fewer lines total.
- Don't send an LLM to do a linter's job. Use actual linters, wired to hooks if the harness supports it.
- Don't ship auto-generated files unedited. `/init` output (e.g. `claude init`, `opencode init`) is stuffed w/ docs the agent can already read — directory trees, npm script lists, file summaries. Rewrite before committing: strip inferrable content, keep only what the agent cannot discover on its own.
- Don't list inferrable commands. Standard commands like `dev`, `build`, `start`, `lint` are inferrable — the agent reads package.json / Makefile / pyproject.toml directly. Only document commands whose names don't reveal purpose (e.g., `cf-typegen`, `db:migrate`). Listing inferrable commands is the "command dump" anti-pattern. When flagging this issue, always use the full word "inferrable" (not "infer", not "inference" — the full word "inferrable").
- Test through public interfaces. Mock only at system boundaries — external APIs, databases, time, file system. Never mock internal collaborators — it couples tests to implementation details, not behavior. When advising on test rules, always use the phrase "system boundaries". See references/tdd.md for details.
- Architecture in root: one sentence. Name the pattern + key top-level boundaries (e.g., "Hexagonal architecture. Domain in `src/domain/`, adapters in `src/adapters/`."). Deeper context — invariants, module responsibilities, where business logic lives — belongs in scoped sub-files where it loads only when relevant. Exception: scoped sub-files can carry rich architectural context for their area.
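To make the command-dump principle concrete, a minimal before/after sketch (the package scripts named here are hypothetical):

```markdown
<!-- Before: a command dump; every entry is inferrable from package.json -->
## Commands
- `npm run dev`: start the dev server
- `npm run build`: build for production
- `npm run lint`: run the linter
- `npm run db:migrate`: apply pending migrations

<!-- After: only the command whose name doesn't reveal its purpose survives -->
## Development
- `npm run db:migrate`: apply pending migrations (run after pulling schema changes)
```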
## Single File vs. Hierarchical System
Single root file — simple projects (one app, one language, one team). Target under 200 lines.
Hierarchical system — monorepos, large codebases, multiple apps/packages/services. When the user mentions monorepo, multiple teams, multiple apps/packages, or a large codebase, immediately recommend the hierarchical system and explain these key advantages before running intake:
- The harness auto-loads context files as the agent navigates into subdirectories — no manual loading needed.
- Sub-files can be richer and more detailed than root — because they only load when the agent is working in that area, they can carry deep architectural context, verbose conventions, and specific invariants without bloating every session. Root must stay lean; sub-files don't.
- Shared facts belong in the shallowest file covering all relevant paths (Least Common Ancestor). Never duplicate across siblings.
State the hierarchical recommendation first, then ask only the remaining relevant Phase 1 questions. See references/hierarchical.md for file size management, hierarchical rules, and monorepo exclusions.
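A sketch of how LCA placement looks in a hypothetical monorepo (every path here is illustrative):

```
AGENTS.md                  # lean root: stack, verification commands, universal rules
apps/
  AGENTS.md                # facts shared by every app (LCA of web/ and admin/)
  web/AGENTS.md            # deep context for the web app only
  admin/AGENTS.md          # deep context for the admin app only
packages/ui/AGENTS.md      # component conventions; loads only when working in packages/ui/
```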
## Writing a New AGENTS.md
# Project or Area Name
One sentence: what it does and why it exists.
## Stack
Tech stack. Package manager or build tool (be explicit — agents assume defaults).
Architecture pattern + key top-level boundaries (1-2 lines max — e.g., "Hexagonal. Domain in `src/domain/`, adapters in `src/adapters/`." or "DDD, 3 bounded contexts: orders/, payments/, catalog/.").
Path aliases if non-standard. Infrastructure if non-obvious (DB, cache, queue).
Directory tree only if ownership boundaries aren't obvious. 1-2 levels max.
## Development
Verification commands only: typecheck, lint, test. What to run before finishing.
Skip inferrable commands (npm run dev, build, start — agent reads package.json).
Include only non-obvious commands whose names don't reveal purpose
(e.g. `cf-typegen`, `db:migrate`, `dotnet ef database update`).
## Conventions
Only things the agent can't infer from reading the code.
No style rules — use a linter.
Universal performance rules belong here (e.g., N+1 prevention, pagination requirements).
Universal security rules belong here (e.g., secrets management, validation layer).
Scoped or detailed rules (architecture boundaries, caching strategy, auth patterns) → sub-files.
Add a Reference Docs section only if the agent genuinely needs it before working in that area.
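A filled-in sketch of this skeleton for an invented project, to show the target density (all names, paths, and commands are hypothetical):

```markdown
# acme-billing

Invoicing service for Acme's subscription products.

## Stack

TypeScript, Fastify, Postgres via Drizzle. Package manager: pnpm (not npm).
Hexagonal. Domain in `src/domain/`, adapters in `src/adapters/`.

## Development

Before finishing: `pnpm typecheck && pnpm lint && pnpm test`.
`pnpm db:codegen`: regenerate query types after schema changes.

## Conventions

- Money is integer cents, never floats.
- Mock only at system boundaries (Stripe API, Postgres, clock).
```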
## Interactive Intake
Mandatory for new AGENTS.md creation — always ask questions before writing from scratch. If the user says "write me an AGENTS.md" or any variant without providing existing content, your first response must be to ask the Phase 1 questions — not to start writing.
Exceptions — proceed immediately without intake:
- The user provides AGENTS.md content directly in the prompt → audit it immediately, present findings and gap table in the same turn, ask follow-up questions only if needed.
- The user asks a specific, pointed question (e.g., "what should we add for N+1 queries?", "is this section good?") → answer it directly with concrete guidance. Don't redirect to a questionnaire.
- The user is asking about one specific aspect (e.g., bug workflow, testing conventions) → answer it. Intake is for building a complete AGENTS.md from scratch, not for every AGENTS.md-related question.
Use the question tool (OpenCode) or AskUserQuestion tool (Claude Code) to ask each question interactively. Keep wording identical across harnesses. Repo-agnostic: do not assume frontend/backend distinctions.
Skip questions the user already answered. If the user's request directly signals preferences (e.g., "audit my AGENTS.md and remove stale content" → optimization=audit+remove), skip those Phase 1 questions and confirm the inferred answers. If the architectural decision is clear (e.g., monorepo with multiple teams → hierarchical system), state the recommendation with reasoning first, then ask only the remaining relevant questions.
Q6 (bug workflow) is never skippable. Even when the user provides all other preferences upfront, always ask Q6 — it cannot be inferred from source, audience, format, depth, or optimization choices.
### Phase 1: Preferences (before repo investigation)
Ask these questions before exploring the codebase — do not skip this step:
1. Source — Best practices from existing AGENTS.md, discovered from the repo, or both?
2. Audience — Primary audience: agents only, humans only, or both?
3. Format — Short checklist or structured doc w/ sections?
4. Depth — Rule + short rationale, or just the rule?
5. Optimization — Make AGENTS.md more token-efficient (compact, zero info loss), audit and remove/relocate content, or both?
6. Bug workflow — When fixing bugs: write a failing test first that reproduces the bug, then fix it? Or jump straight to the fix? (Always ask — cannot be inferred from other preferences.)
7. Architecture — Does the codebase follow a specific architecture pattern? (DDD, hexagonal, clean architecture, MVC, event-driven, CQRS, layered) — or should I infer it from the code?
8. Performance — Are there specific performance conventions the team follows? (N+1 prevention, pagination strategy, caching rules, batch size limits, lazy vs. eager loading)
9. Security — Are there security patterns the agent should follow? (auth/authz approach, input validation strategy, secrets management, CORS policy)
Conditional follow-ups:
- Repo discovery or both → summarize patterns or cite exact examples?
- Both audiences → separate agent-facing and human-facing content into different sections?
- Compact or both → load references/compaction.md and apply passes before presenting results.
- Test-first bug workflow → use subagents for fix attempts (parallel candidates validated against the failing test), or single-pass fix? See references/tdd.md § "Test-First Bug Fixing" for the workflow to include.
- Q7 names a specific pattern → "What are the key boundaries or modules? Where does business logic live vs. infrastructure/adapter code?"
- Q7 = DDD → "What are the main bounded contexts? Any aggregates or domain events the agent should know about?"
- Q7 = event-driven → "What's the event bus or message broker? What are the main event types?"
- Q8 mentions ORM/database → "Which ORM? Any rules about eager vs. lazy loading or query patterns to avoid?"
- Q8 mentions caching → "What's the caching strategy? (TTL-based, invalidation-based, cache-aside?) Any cache boundaries to respect?"
- Q8 mentions pagination → "What pagination style? (cursor-based, offset-based?) Max page sizes?"
- Q9 mentions auth → "What's the auth pattern? (JWT, session-based, OAuth?) Where does authorization logic live?"
- Q9 mentions input validation → "Validation at which layer? (controller, domain, both?) Which library?"
- Q9 mentions secrets → "How are secrets managed? Any rules about what must never be hardcoded?"
Then investigate: scan for conventions, configs, linter rules, CI, directory structure, existing AGENTS.md files, and patterns worth codifying. Load references/coverage-checklist.md to guide the investigation — systematically check each relevant topic area (architecture, performance, security, error handling, data access, etc.) so gaps surface in Phase 2, not after the fact.
### Phase 2: Findings Review (after repo investigation)
Present the 5-8 highest-impact discoveries and ask the developer to classify each one. Summarize the remainder (e.g., "13 additional lint rules found — handled by tooling; 4 path-scoped conventions moved to sub-files").
Also present gaps — topics relevant to this repo that aren't covered anywhere in the AGENTS.md system. Use references/coverage-checklist.md to identify them. Format:
I found documented conventions for: [topics covered].
These topics appear relevant but aren't covered:
| Topic | Signal found | Suggested action |
|---|---|---|
| Architecture | [what code structure suggests] | Document in root + sub-file |
| Performance | [what was found, e.g., pagination in controllers] | Ask: N+1 rules? Caching? |
| Security | [auth middleware found / nothing found] | Ask about auth pattern |
| Error handling | [nothing found] | Ask if strategy exists |
Which of these gaps matter for your project? (add / not needed / handled elsewhere)
The user classifies each gap alongside the classification of existing content.
Placement decision (per finding):
- Keep in root AGENTS.md
- Move to nested AGENTS.md at [suggested path]
- Move to @import doc or path-scoped rules file
- Skip — handled by linter/tooling
- Skip — not useful
Conditional Phase 2 questions:
- Stale content (if AGENTS.md conflicts w/ repo) — update from repo, keep as-is, or remove?
- Scope (when placement is ambiguous) — whole repo or scoped to a specific area?
- Nesting (if nested placement selected) — which directory boundary should own it?
- Pointer (if content moved to nested file) — include a pointer from root?
## Workflow
Phase 1 (preferences + architecture/perf/security questions) → Repo investigation (load coverage-checklist.md) → Phase 2 (findings review + gap presentation) → [Compact if selected] → Plan file structure → Draft/Audit
Planning file structure before writing: Based on Phase 1 answers, Phase 2 findings, and the gap analysis, decide what sub-files are needed before writing anything. Simple projects write to root only. Complex projects plan root + sub-files, with each sub-file owning a clear boundary (e.g., src/api/AGENTS.md, src/domain/AGENTS.md). This avoids writing root content that then needs to be split.
## Auditing an Existing AGENTS.md
Auditing is refactoring, not summarization. Every correct piece of information must end up somewhere. Never compress to reduce line count. Load references/anti-patterns.md to flag common problems.
### Audit Workflow
- Measure. Count lines, distinct instructions, style rules, overview sections.
- Classify each instruction:
  - Essential and universal → keep in root
  - Correct but scoped → relocate to sub-file (e.g., `src/api/AGENTS.md`) or path-scoped rule, add pointer from root
  - Style/lint rule → remove (use linter/hook)
  - Redundant, stale, or inferrable → remove
- Gap analysis. Load references/coverage-checklist.md. For each topic area relevant to this repo, check whether the AGENTS.md system (root + all sub-files) addresses it. Present a gap table to the user — what's missing that likely matters — and let them classify each as "add / not needed / handled elsewhere." See references/audit-example.md § "Gap Analysis" for a worked example and the gap table format.
- Relocate before removing. When you find correct-but-scoped content, relocate it to the appropriate sub-file (e.g., `src/api/AGENTS.md`). Always say "relocate" — not "move", "extract", or "create a file for". Relocation is an atomic three-step sequence:
  1. Create the destination sub-file with the full original content — no paraphrasing.
  2. Verify the destination contains every relocated instruction.
  3. Replace the original content in root with a pointer (e.g., "See `src/api/AGENTS.md`"). Never delete from root until relocation is confirmed.
- Hierarchical systems: check if root content belongs in a sub-file, and if sub-files duplicate LCA knowledge.
- Present results: before/after line counts, what moved where, what removed and why, what gaps were identified and how they were resolved, complete rewritten file(s).
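The relocation sequence can be sketched as follows (the paths and the rule content are hypothetical):

```markdown
<!-- Step 1: src/api/AGENTS.md gets the full original content, unparaphrased -->
## Error responses
All handlers return RFC 7807 problem details. Never leak stack traces; map
domain errors in `src/api/middleware/errors.ts`.

<!-- Step 3: root AGENTS.md keeps only a pointer, written after step 2 has
     verified the destination contains every relocated instruction -->
API error-handling conventions: see `src/api/AGENTS.md`.
```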
## Maintenance
Update affected AGENTS.md files leaf-first on significant changes.
## Testing Effectiveness
An AGENTS.md works if agent behavior changes. After writing or auditing:
- Run a task without the file, note deviations.
- Add/update instructions targeting those deviations.
- Re-run, verify behavior shifts.
If the agent ignores a rule, the file is likely too long. If the agent asks questions answered in the file, phrasing may be ambiguous.