# onboard-agent
## MANDATORY PREPARATION

Invoke /agent-workflow first; it contains workflow principles, anti-patterns, and the Context Gathering Protocol. Follow that protocol before proceeding. If no workflow context exists yet, you MUST run /teach-maestro first.
Bootstrap a new agent workflow from scratch, or add a new agent to an existing system.
## Step 1: Establish Conventions

Document the workflow conventions before writing any code, choosing one option per bracket:

    ## Workflow Conventions

    ### Prompt Format
    - Delimiter style: [XML tags / markdown headers / triple-dash]
    - Section order: [System → Context → Instructions → Input]
    - Output format: [JSON with schema / markdown template]

    ### Tool Conventions
    - Naming: [verb_noun / noun.verb / camelCase]
    - Description template: [What → When → When Not → Returns]
    - Error format: [{ code, message, details }]

    ### Logging
    - Format: [JSON structured]
    - Required fields: [workflow_id, step, timestamp, level]

    ### File Structure
    - Prompts: [prompts/workflow-name/v1.md]
    - Tools: [tools/tool-name.{ext}]
    - Config: [config/environment.yaml]
    - Tests: [tests/workflow-name/]
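To make the error and logging conventions concrete, here is a minimal Python sketch. The names `ToolError` and `log_event` are illustrative, not part of Maestro; the point is that every tool error carries `{ code, message, details }` and every log line carries the required fields.

```python
import json
import time
from dataclasses import dataclass, field, asdict

@dataclass
class ToolError:
    """Structured error matching the { code, message, details } convention."""
    code: str
    message: str
    details: dict = field(default_factory=dict)

def log_event(workflow_id: str, step: str, level: str, **extra) -> str:
    """Emit one JSON log line containing the required fields, plus extras."""
    record = {
        "workflow_id": workflow_id,
        "step": step,
        "timestamp": time.time(),
        "level": level,
        **extra,
    }
    line = json.dumps(record)
    print(line)
    return line

if __name__ == "__main__":
    err = ToolError("E_TIMEOUT", "tool call exceeded deadline")
    log_event("wf-demo", "fetch", "error", error=asdict(err))
```

Pinning these shapes down in one shared module means every agent added later logs and fails the same way.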
## Step 2: Create Initial Structure

    project/
    ├── prompts/      # System prompts, versioned
    ├── tools/        # Tool definitions
    ├── config/      # Environment-specific configuration
    ├── tests/        # Golden test sets and evaluation suites
    ├── logs/         # Runtime logs (gitignored)
    └── .maestro.md   # Workflow context
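The layout above can be scaffolded with a short script. This Python sketch mirrors the tree exactly; the `scaffold` helper is hypothetical, not a Maestro command.

```python
from pathlib import Path

# Directories from the project layout above; logs/ is runtime-only.
LAYOUT = ["prompts", "tools", "config", "tests", "logs"]

def scaffold(root: str) -> Path:
    """Create the standard project skeleton under `root` and return its path."""
    base = Path(root)
    for d in LAYOUT:
        (base / d).mkdir(parents=True, exist_ok=True)
    (base / ".maestro.md").touch()          # workflow context lives at the root
    (base / ".gitignore").write_text("logs/\n")  # keep runtime logs out of git
    return base
```

Because `mkdir(..., exist_ok=True)` is idempotent, rerunning the scaffold on an existing project is safe.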
## Step 3: Create the First Agent
- System prompt: Role definition with constraints
- 2-3 essential tools: Start with the minimum viable tool set
- Output schema: Define expected output format
- One golden test: At least one test case with known-good output
- Basic error handling: Structured error responses
- Logging: Structured log output for each run
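A minimal first agent might wire these pieces together as in this Python sketch. `SYSTEM_PROMPT`, `TOOLS`, and `run_agent` are illustrative names, the model call itself is omitted, and the output schema is reduced to a single required field.

```python
import json

SYSTEM_PROMPT = 'You are a summarizer. Respond only with JSON: {"summary": <string>}.'

# Minimum viable tool set: each description says what the tool does and returns.
TOOLS = {
    "fetch_document": "Fetches a document by id. Returns its full text.",
    "store_summary": "Stores a finished summary. Returns a record id.",
}

def run_agent(raw_output: str) -> dict:
    """Validate raw model output against the schema; never raise, always
    return either {"ok": data} or a structured {"error": ...}."""
    try:
        data = json.loads(raw_output)
    except json.JSONDecodeError as exc:
        return {"error": {"code": "BAD_JSON", "message": str(exc), "details": {}}}
    if not isinstance(data.get("summary"), str):
        return {"error": {"code": "SCHEMA_MISMATCH",
                          "message": "missing 'summary' string field",
                          "details": {"got": list(data)}}}
    return {"ok": data}
```

Returning errors as data rather than raising keeps every failure mode visible to the caller and loggable in the structured format established in Step 1.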
## Step 4: Verify
- Run the agent with the golden test case
- Verify error handling works (send bad input)
- Verify logging captures useful context
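The first two checks can be sketched as pytest-style tests. `parse_output` is an illustrative stand-in for invoking the agent and parsing its reply; the golden fixture content is made up for the example.

```python
import json

def parse_output(raw: str) -> dict:
    """Stand-in for running the agent: parse its reply or return a
    structured error instead of raising."""
    try:
        return {"ok": json.loads(raw)}
    except json.JSONDecodeError as exc:
        return {"error": {"code": "BAD_JSON", "message": str(exc), "details": {}}}

def test_golden_case():
    # Golden test: a known input must reproduce the known-good output.
    assert parse_output('{"summary": "ok"}') == {"ok": {"summary": "ok"}}

def test_bad_input():
    # Error handling: malformed input yields a structured error, not a crash.
    assert parse_output("{broken")["error"]["code"] == "BAD_JSON"
```

Keeping these under `tests/workflow-name/` (per the file-structure conventions) gives /diagnose and /evaluate a baseline to run against later.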
## Recommended Next Step
After onboarding, run /diagnose for a baseline health check, then /fortify to add production-grade error handling.
## NEVER
- Start building without establishing conventions
- Create tools without descriptions
- Skip the golden test case
- Over-scope the initial agent (start minimal, amplify later)