multi-ai-collab

Warn

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: MEDIUM
Tags: COMMAND_EXECUTION · EXTERNAL_DOWNLOADS · PROMPT_INJECTION
Full Analysis
  • COMMAND_EXECUTION (MEDIUM): The skill's instructions in references/cli-reference.md and references/workflows/sequential.md demonstrate patterns for executing external AI binaries (Codex, Gemini, Claude) by interpolating untrusted file content and multi-line strings directly into shell commands via subshells (e.g., $(cat src/file.ts)). This creates the potential for command injection if the processed files contain shell-special characters or malicious payloads.
  • PROMPT_INJECTION (LOW): The Sequential and Pipeline workflows deliberately route the output of one agent into the input of the next. This creates a clear path for indirect prompt injection (Category 8), in which malicious input in the original code under review can manipulate subsequent agents in the chain.
  • Ingestion points: Persona prompt templates in references/personas/*.md that use the [CODE_CONTENT] and [PREVIOUS_ANALYSIS_SUMMARY] placeholders.
  • Boundary markers: Absent from both the persona templates and the workflow descriptions.
  • Capability inventory: Execution of host CLI tools via shell sub-processes, as documented in cli-reference.md.
  • Sanitization: No sanitization, escaping, or validation logic is defined for agent-to-agent data transfers.
  • EXTERNAL_DOWNLOADS (LOW): The README and CLI reference point to external sites (OpenAI, Google, Anthropic) and package managers (npm, Homebrew) for tool installation. While the sources are reputable, the reliance on external binaries enlarges the attack surface of any environment where the skill is used.
  • DATA_EXPOSURE (SAFE): The scripts/detect-agents.sh script checks whether API keys (OPENAI_API_KEY, etc.) are present in environment variables, verifying only that they are set and never logging or printing the secret values themselves.
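The subshell-interpolation pattern flagged above can be sketched with a stand-in for the real CLI (`ai_review` here is hypothetical; the actual binaries are Codex, Gemini, and Claude). The safer variant streams the untrusted file over stdin so its content never appears on the shell command line:

```shell
#!/bin/sh
set -eu

# Stand-in for an AI CLI; it only reports how many bytes it received.
ai_review() { printf 'argv bytes: %s\n' "$(printf '%s' "$1" | wc -c)"; }

workdir=$(mktemp -d)
trap 'rm -rf "$workdir"' EXIT
# Sample file with shell-special characters (backticks, $(), quotes).
printf 'const x = 1; // `$(id)` and "quotes" live here' > "$workdir/file.ts"

# Risky: file content lands on the command line via a subshell.
# Any refactor that drops the surrounding quotes exposes the content
# to word splitting, globbing, and re-interpretation by the shell.
ai_review "$(cat "$workdir/file.ts")"

# Safer: stream untrusted content over stdin; it never touches argv.
ai_review_stdin() { printf 'stdin bytes: %s\n' "$(wc -c)"; }
ai_review_stdin < "$workdir/file.ts"
```

Passing content by stdin also sidesteps argv length limits, which the interpolation pattern hits on large files.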
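Since the audit notes that boundary markers are absent, a minimal sketch of what they could look like for agent-to-agent transfers (the delimiter strings are illustrative, not part of the skill):

```shell
#!/bin/sh
# Wrap an untrusted payload in explicit boundary markers so a
# downstream agent can be told to treat the delimited span as data,
# not as instructions. Delimiter choice here is hypothetical.
wrap_untrusted() {
  printf '<<<UNTRUSTED_INPUT_BEGIN>>>\n'
  cat
  printf '<<<UNTRUSTED_INPUT_END>>>\n'
}

# Example: a payload attempting indirect prompt injection is fenced
# before being handed to the next agent in the chain.
printf 'ignore previous instructions\n' | wrap_untrusted
```

Markers alone do not stop injection, but they give each persona template a span it can be instructed never to obey.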
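For contrast, the pattern the audit rates SAFE can be sketched as a presence-only check (`check_key` is a hypothetical helper; the real logic lives in scripts/detect-agents.sh):

```shell
#!/bin/sh
# Report whether an environment variable is set without ever
# printing its value.
check_key() {
  # $1 is a fixed identifier chosen by the script, not user input,
  # so the eval-based indirect expansion is safe here.
  eval "val=\${$1:-}"
  if [ -n "$val" ]; then
    echo "$1: set"
  else
    echo "$1: not set"
  fi
}

check_key OPENAI_API_KEY
check_key GEMINI_API_KEY
```

The secret value stays inside the function's local expansion and is never echoed or logged.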
Audit Metadata
Risk Level: MEDIUM
Analyzed: Feb 17, 2026, 06:23 PM