ai-orchestration-feedback-loop
Fail
Audited by Gen Agent Trust Hub on Mar 10, 2026
Risk Level: HIGHCOMMAND_EXECUTIONPROMPT_INJECTIONREMOTE_CODE_EXECUTION
Full Analysis
- [COMMAND_EXECUTION]: The skill instructs the agent to execute CLI commands using shell subshell expansion (
$(cat ...)) to interpolate local file contents (such asplan.mdorimplementation.md) into command arguments. This pattern allows for arbitrary command execution on the host system if the file contents contain shell metacharacters, backticks, or subshell syntax.\n- [PROMPT_INJECTION]: The skill is highly vulnerable to indirect prompt injection due to its core workflow of reading local project files and passing them as context to subsequent AI models without sanitization.\n - Ingestion points: Files including
.ai-orchestration/plan.md,.ai-orchestration/implementation.md, and.ai-orchestration/phase5b_handoff.mdare read into CLI prompts using shell expansion.\n - Boundary markers: Prompt templates in the
references/directory lack delimiters (such as XML tags) or explicit instructions to the AI to ignore instructions embedded within the provided file content.\n - Capability inventory: The skill utilizes file manipulation tools (Edit/Write/Read) and shell execution capabilities through the Codex and Gemini CLIs.\n
- Sanitization: No content sanitization or filtering is performed on the input files; Phase 5c only describes basic syntax validation for the final output.\n- [REMOTE_CODE_EXECUTION]: The 'Auto-integrate' feature in the Co-Implementation workflow allows code generated by external AI models to be automatically written to the project filesystem. If an AI model is manipulated via indirect injection through the orchestration loop, it could generate and integrate malicious code or backdoors directly into the workspace.
Recommendations
- AI detected serious security threats
Audit Metadata