# aigw-orchestrator
Orchestrate a full Envoy AI Gateway deployment by asking intake questions and composing the appropriate atomic skills. Use this when the user wants to set up AI Gateway from scratch or add a new provider.
## Intake Questions
Before generating configuration, ask:

### Installation

- Do you already have Envoy Gateway installed? If not, we need `/aigw-install`.
- Do you need rate limiting or InferencePool? (add-ons)

### Provider

- Which AI provider(s)? (OpenAI, Anthropic, AWS Bedrock, Azure OpenAI, GCP Vertex AI, Cohere, self-hosted/Ollama, etc.)
- For cloud providers: how will you authenticate? (API key, IRSA/Pod Identity, service account, etc.)

### Routing

- Route by model? (e.g., gpt-4o-mini → backend A, claude-3-5-sonnet → backend B)
- Need failover or traffic splitting?

### Environment

- Namespace for the Gateway and routes?
- Gateway name (if reusing an existing Gateway)?
## Composition Flow
- If fresh install: run `/aigw-install` with the user's version/namespace preferences.
- Gateway + ClientTrafficPolicy: ensure the Gateway exists and has a ClientTrafficPolicy with `bufferLimit: 50Mi`.
- For each provider:
  - Run `/aigw-backend` with BackendName, Schema, Hostname, and Port. (The AIServiceBackend must reference a Backend, not a Kubernetes Service.)
  - Run `/aigw-auth` with PolicyType and AIServiceBackendName; create a Secret if authenticating with an API key. At most one BackendSecurityPolicy per backend.
  - Add a BackendTLSPolicy for HTTPS backends.
- Route: run `/aigw-route` with GatewayName, BackendNames, and an optional ModelHeader for each rule.
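As a sketch of the Gateway step above, a ClientTrafficPolicy enforcing the `bufferLimit: 50Mi` might look like the following. The name `ai-gateway` and namespace `default` are placeholders, and the exact field layout depends on your Envoy Gateway version:

```yaml
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: ClientTrafficPolicy
metadata:
  name: ai-gateway-buffer
  namespace: default        # placeholder namespace
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: Gateway
      name: ai-gateway      # placeholder: use the user's Gateway name
  connection:
    bufferLimit: 50Mi       # large buffer for LLM request/response bodies
```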
## Example: OpenAI + Anthropic
Intake: the user wants OpenAI (gpt-4o-mini) and Anthropic (claude-3-5-sonnet) behind one Gateway.

Generated flow:

- Install (if needed): `/aigw-install`
- Gateway + ClientTrafficPolicy (from the aigw-route skill)
- Backend + AIServiceBackend for OpenAI: `/aigw-backend` BackendName=openai, Schema=OpenAI, Hostname=api.openai.com, Port=443
- BackendSecurityPolicy + Secret for OpenAI: `/aigw-auth` PolicyType=APIKey, AIServiceBackendName=openai
- BackendTLSPolicy for api.openai.com
- Backend + AIServiceBackend for Anthropic: `/aigw-backend` BackendName=anthropic, Schema=Anthropic, Hostname=api.anthropic.com, Port=443
- BackendSecurityPolicy + Secret for Anthropic: `/aigw-auth` PolicyType=AnthropicAPIKey, AIServiceBackendName=anthropic
- BackendTLSPolicy for api.anthropic.com
- AIGatewayRoute with two rules:
  - Match `x-ai-eg-model=gpt-4o-mini` → openai
  - Match `x-ai-eg-model=claude-3-5-sonnet` → anthropic
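The two routing rules above could be expressed roughly as the following AIGatewayRoute. This is a sketch: the resource name `ai-route`, the Gateway name `ai-gateway`, and the exact schema depend on the Envoy AI Gateway version in use:

```yaml
apiVersion: aigateway.envoyproxy.io/v1alpha1
kind: AIGatewayRoute
metadata:
  name: ai-route            # placeholder name
  namespace: default
spec:
  parentRefs:
    - name: ai-gateway      # placeholder: the Gateway created above
  rules:
    # Requests declaring gpt-4o-mini go to the OpenAI backend
    - matches:
        - headers:
            - type: Exact
              name: x-ai-eg-model
              value: gpt-4o-mini
      backendRefs:
        - name: openai
    # Requests declaring claude-3-5-sonnet go to the Anthropic backend
    - matches:
        - headers:
            - type: Exact
              name: x-ai-eg-model
              value: claude-3-5-sonnet
      backendRefs:
        - name: anthropic
```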
## Checklist
- All intake questions answered
- Install steps included if needed
- ClientTrafficPolicy with bufferLimit on Gateway
- Each provider has Backend + AIServiceBackend + BackendSecurityPolicy + BackendTLSPolicy (for HTTPS)
- At most one BackendSecurityPolicy per AIServiceBackend (or InferencePool)
- AIGatewayRoute rules match user's routing intent
- Secrets created for API keys (never hardcode keys in YAML)
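For the last checklist item, an API key would typically live in a Kubernetes Secret referenced by the BackendSecurityPolicy, along the lines of this sketch. All names here, and the `targetRefs`/`apiKey` field layout, are assumptions that may vary by Envoy AI Gateway version:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: openai-apikey
  namespace: default
type: Opaque
stringData:
  apiKey: "<paste key here>"   # never commit a real key in YAML
---
apiVersion: aigateway.envoyproxy.io/v1alpha1
kind: BackendSecurityPolicy
metadata:
  name: openai-apikey-policy
  namespace: default
spec:
  targetRefs:
    - group: aigateway.envoyproxy.io
      kind: AIServiceBackend
      name: openai             # the AIServiceBackend this policy secures
  type: APIKey
  apiKey:
    secretRef:
      name: openai-apikey      # the Secret defined above
```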
Repository: missberg/envoy-skills