# enrich
## MANDATORY PREPARATION
Invoke /agent-workflow; it contains workflow principles, anti-patterns, and the Context Gathering Protocol. Follow that protocol before proceeding. If no workflow context exists yet, you MUST run /teach-maestro first. Consult the knowledge-systems reference in the agent-workflow skill for RAG architecture, chunking strategies, and retrieval patterns.
Add knowledge sources to ground the workflow in facts. Without grounding, agents hallucinate. With grounding, they cite sources.
## Knowledge Source Assessment
Identify what knowledge the workflow needs:
| Knowledge Type | Source | Update Frequency | Access Pattern |
|---|---|---|---|
| Domain docs | Internal docs, specs | Monthly | Semantic search |
| Code context | Codebase | Real-time | Code search |
| User data | Database, CRM | Real-time | Structured query |
| External data | APIs, web | Real-time | API call |
| Historical | Logs, past interactions | Daily | Time-range query |
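The assessment table above can be captured as a small registry so the workflow routes each request to sources by access pattern. This is an illustrative sketch: the type names, pattern identifiers, and `sources_for` helper are assumptions, not part of any Maestro API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class KnowledgeSource:
    name: str
    source: str
    update_frequency: str   # how often the underlying data changes
    access_pattern: str     # how an agent should query it

# Hypothetical registry mirroring the assessment table
REGISTRY = [
    KnowledgeSource("domain_docs", "internal docs, specs", "monthly", "semantic_search"),
    KnowledgeSource("code_context", "codebase", "real-time", "code_search"),
    KnowledgeSource("user_data", "database, CRM", "real-time", "structured_query"),
    KnowledgeSource("external_data", "APIs, web", "real-time", "api_call"),
    KnowledgeSource("historical", "logs, past interactions", "daily", "time_range_query"),
]

def sources_for(access_pattern):
    """Return the names of sources answered via a given access pattern."""
    return [s.name for s in REGISTRY if s.access_pattern == access_pattern]

print(sources_for("structured_query"))
```

Keeping the registry declarative makes the assessment auditable: adding a knowledge source means adding one row, not new routing code.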
## Add RAG Pipeline
For document-based knowledge (consult the knowledge-systems reference in the agent-workflow skill):
- Select documents: Identify the authoritative source documents
- Chunk strategy: Choose chunking based on document type (semantic > token-based)
- Embed: Use appropriate embedding model for the domain
- Index: Store in vector database with metadata
- Retrieve: Implement hybrid search (semantic + keyword)
- Inject: Add retrieved context to the prompt with source attribution
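The six steps above can be sketched end to end. This is a minimal illustration, not a production pipeline: it uses a toy bag-of-words "embedding" in place of a real model, naive sentence grouping in place of semantic chunking, and hypothetical names (`hybrid_search`, `build_context`, the sample documents) throughout.

```python
import math
import re
from collections import Counter

def chunk(text, max_sentences=3):
    """Naive chunking: group a few sentences per chunk (real pipelines
    would use a semantic chunker keyed to the document type)."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return [" ".join(sentences[i:i + max_sentences])
            for i in range(0, len(sentences), max_sentences)]

def embed(text):
    """Toy bag-of-words vector; swap in a real embedding model."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_search(query, index, k=2, alpha=0.7):
    """Blend semantic similarity with keyword overlap."""
    q_vec = embed(query)
    q_terms = set(q_vec)
    scored = []
    for doc in index:
        semantic = cosine(q_vec, doc["vector"])
        keyword = len(q_terms & set(doc["vector"])) / len(q_terms)
        scored.append((alpha * semantic + (1 - alpha) * keyword, doc))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored[:k]]

def build_context(query, index):
    """Inject retrieved chunks into the prompt with source attribution."""
    hits = hybrid_search(query, index)
    return "\n".join(f"[{d['source']}] {d['text']}" for d in hits)

# Index: store chunks with their metadata (here, just the source file)
docs = [("refund-policy.md", "Refunds are issued within 14 days. Contact support to start one."),
        ("shipping.md", "Orders ship within 2 business days. Tracking is emailed on dispatch.")]
index = [{"source": src, "text": c, "vector": embed(c)}
         for src, text in docs for c in chunk(text)]

print(build_context("How do refunds work?", index))
```

The `[source]` prefix on each injected chunk is what later lets the model cite, and lets you detect when it asserts something no chunk supports.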
## Add Structured Data
For database-backed knowledge:
- Define the query interface: Natural language → structured query
- Add guardrails: Read-only access, query complexity limits
- Format results: Transform raw data into context the model can use
- Attribute: Include data source and freshness in the context
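A sketch of the guardrail and formatting steps, assuming the natural-language-to-SQL step has already produced a query. The prefix check, row cap, and `crm.customers` source label are illustrative simplifications; a real deployment would also use database-level read-only credentials.

```python
import sqlite3
from datetime import datetime, timezone

MAX_ROWS = 50  # query complexity / result-size guardrail

def run_guarded_query(conn, sql):
    """Execute only read-only queries and cap the result size."""
    if not sql.strip().lower().startswith("select"):
        raise PermissionError("Only SELECT statements are allowed")
    cursor = conn.execute(sql)
    rows = cursor.fetchmany(MAX_ROWS)
    columns = [c[0] for c in cursor.description]
    return columns, rows

def format_for_context(source, columns, rows):
    """Turn raw rows into model-readable context with attribution."""
    fetched = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M UTC")
    lines = [f"Source: {source} (fetched {fetched})"]
    lines += [", ".join(f"{c}={v}" for c, v in zip(columns, row)) for row in rows]
    return "\n".join(lines)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, plan TEXT)")
conn.execute("INSERT INTO customers VALUES ('Ada', 'pro'), ('Bo', 'free')")

cols, rows = run_guarded_query(conn, "SELECT name, plan FROM customers")
print(format_for_context("crm.customers", cols, rows))
```

Formatting rows as `column=value` pairs rather than raw tuples gives the model the schema alongside the data, and the `fetched` timestamp covers the attribution step.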
## Add Real-Time Data
For live information:
- Identify APIs: What external services provide the needed data
- Cache strategy: How often does the data change? Cache accordingly
- Fallback: What happens when the API is down?
- Attribution: Include data timestamp and source
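The cache and fallback bullets can be combined in one wrapper: serve cached data within a TTL, refetch when it expires, and fall back to the last known-good value if the API is down. `CachedSource` and `flaky_fetch` are hypothetical names for illustration.

```python
import time

class CachedSource:
    """Wrap a live fetcher with a TTL cache and a last-known-good fallback."""

    def __init__(self, fetch, ttl_seconds=60):
        self.fetch = fetch          # callable that hits the live API
        self.ttl = ttl_seconds      # set from how often the data changes
        self.value = None
        self.fetched_at = 0.0

    def get(self):
        """Return (value, status) with status 'cached', 'live', or 'stale'."""
        now = time.time()
        if self.value is not None and now - self.fetched_at < self.ttl:
            return self.value, "cached"
        try:
            self.value = self.fetch()
            self.fetched_at = now
            return self.value, "live"
        except Exception:
            if self.value is not None:
                return self.value, "stale"  # API down: serve last good value
            raise                           # no fallback available yet

def flaky_fetch():
    """Stand-in for a real API call; succeeds once, then fails."""
    flaky_fetch.calls += 1
    if flaky_fetch.calls > 1:
        raise ConnectionError("API down")
    return {"price": 42, "as_of": "2024-01-01T00:00Z"}  # timestamp = attribution
flaky_fetch.calls = 0

source = CachedSource(flaky_fetch, ttl_seconds=0)  # ttl=0 forces refetch each call
print(source.get())  # ('live' on first call)
print(source.get())  # fetch fails, so last value is served as 'stale'
```

Surfacing the status string lets the workflow attach freshness to the context ("price as of …, served stale"), instead of silently presenting stale data as live.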
## Enrichment Checklist
- Every knowledge source has attribution (source, date, confidence)
- Retrieval quality tested independently of generation quality
- Chunk sizes tested and optimized for the document types
- Fallbacks exist for all external knowledge sources
- Knowledge base has a refresh/update strategy
- PII is handled appropriately in knowledge sources
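The second checklist item — testing retrieval independently of generation — can be done with a small labeled evaluation set scored with recall@k, never invoking the model at all. The eval set, corpus, and `keyword_retriever` stub below are hypothetical; the real retriever would be the one under test.

```python
def recall_at_k(retrieve, eval_set, k=3):
    """Fraction of queries whose expected source appears in the top-k results.

    `retrieve(query, k)` returns a ranked list of source identifiers, so
    this isolates retrieval quality from generation quality.
    """
    hits = sum(1 for query, expected in eval_set if expected in retrieve(query, k))
    return hits / len(eval_set)

# Hypothetical labeled set: (query, source that should be retrieved)
eval_set = [
    ("How do refunds work?", "refund-policy.md"),
    ("When does my order ship?", "shipping.md"),
    ("What payment methods exist?", "billing.md"),
]

def keyword_retriever(query, k):
    """Stub ranking by crude keyword overlap; replace with the real retriever."""
    corpus = {"refund-policy.md": "refund refunds money back",
              "shipping.md": "ship shipping order dispatch",
              "billing.md": "invoice card payment methods"}
    terms = set(query.lower().split())
    ranked = sorted(corpus, key=lambda s: -len(terms & set(corpus[s].split())))
    return ranked[:k]

print(recall_at_k(keyword_retriever, eval_set, k=1))
```

Tracking recall@k over time also supports the refresh-strategy item: a drop after a knowledge-base update signals a chunking or indexing regression before users see it.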
## Recommended Next Step
After enrichment, run /evaluate to test retrieval quality, or /iterate to set up continuous monitoring of knowledge freshness.
## NEVER
- Index everything without curation (garbage in = garbage out)
- Skip source attribution (hallucination without attribution is undetectable)
- Build RAG without testing retrieval quality first
- Use fixed chunk sizes for all document types
- Assume embedding similarity equals relevance