llm-council
Warn
Audited by Gen Agent Trust Hub on Mar 6, 2026
Risk Level: MEDIUMCOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
- [COMMAND_EXECUTION]: The orchestration protocol described in
SKILL.mdinterpolates the user-provided$PROMPTvariable directly into shell commands. This pattern is vulnerable to command injection if the input contains shell metacharacters such as backticks or semicolons. - [COMMAND_EXECUTION]: The skill invokes the
omega-cursor-clitool using the--yoloand--trustflags. These parameters are designed to bypass interactive safety confirmations and grant implicit trust to model-generated content, increasing the risk of executing unverified or dangerous instructions. - [PROMPT_INJECTION]: The skill is susceptible to indirect prompt injection during its peer review and synthesis stages. Responses from Stage 1 models are collected and inserted into subsequent prompts without any sanitization or boundary markers. 1. Ingestion points: Stage 1 model responses are read from temporary files (e.g.,
gemini.txt,codex.txt) and interpolated into Stage 2 and 3 prompts. 2. Boundary markers: The skill does not use delimiters or explicit instructions to prevent the model from obeying instructions embedded within the aggregated responses. 3. Capability inventory: The skill utilizesBash,Read, andWritetools to manage the deliberation process. 4. Sanitization: There is no evidence of escaping or filtering the content of external model responses before they are re-processed.
Audit Metadata