ai-model-routing
Audit result: Fail
Audited by Gen Agent Trust Hub on Mar 17, 2026
Risk Level: HIGH
Threat categories: COMMAND_EXECUTION, PROMPT_INJECTION, DATA_EXFILTRATION
Full Analysis
- [COMMAND_EXECUTION]: The skill provides instructions and scripts (such as `scripts/auto-fix.sh`) that explicitly invoke the `claude` CLI with 'Level 4' permissions. This level grants the AI access to the `Bash` tool, allowing it to execute arbitrary shell commands on the local system.
- [PROMPT_INJECTION]: In `scripts/auto-fix.sh`, user-provided input is passed directly as a prompt to the `claude` CLI while it has broad system permissions. This allows a user to potentially control system behavior through prompt injection.
- [DATA_EXFILTRATION]: The toolset granted to the AI across various files includes `Read`, `Grep`, and `Glob`, which allow the model to access files throughout the project. When paired with `Bash` access, this creates a high risk of reading and exfiltrating sensitive data.
- [PROMPT_INJECTION]: The skill is susceptible to indirect prompt injection: it pipes untrusted content such as `git diff` output, error logs, and build results directly into the model prompt in `scripts/code-review.sh` and `SKILL.md`.
- Ingestion points: `scripts/code-review.sh` (ingesting git diffs and file content); `SKILL.md` (piping logs and build outputs directly into the model prompt).
- Boundary markers: absent; untrusted data is not delimited or separated from instructions using any special markers.
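The absent boundary markers could be retrofitted with a small wrapper. The sketch below is illustrative only: the marker strings and the `wrap_untrusted` helper name are assumptions, not part of the audited skill.

```shell
# Sketch: explicit boundary markers around untrusted data (illustrative
# marker strings; not taken from the audited skill).
wrap_untrusted() {
  printf '<<<UNTRUSTED_DATA_BEGIN>>>\n'
  # Defang any marker-like sequences inside the payload itself, so the
  # untrusted content cannot forge its own boundary.
  sed 's/<<<UNTRUSTED_DATA_/[stripped-marker]/g'
  printf '<<<UNTRUSTED_DATA_END>>>\n'
}

# e.g. in scripts/code-review.sh:
#   git diff | wrap_untrusted   # then feed the result into the model prompt
```

Delimiting alone does not make injection impossible, but it lets the prompt instruct the model to treat everything between the markers as data, not instructions.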
- Capability inventory: full access to the `Read`, `Edit`, `Write`, and `Bash` tools via the `claude` CLI.
- Sanitization: none. No escaping or filtering is performed on external data before it is sent to the model context.
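A minimal sanitization pass could likewise filter external data before it enters the model context. This sketch is one possible mitigation under stated assumptions: the control-character filter and the 16 KiB cap are illustrative choices, not values taken from the skill.

```shell
# Sketch: sanitize external data (logs, build output) before it reaches the
# model context. Filter and size cap are illustrative assumptions.
sanitize() {
  # Drop ASCII control characters (keeping tab \011 and newline \012) that
  # could smuggle terminal escape sequences, then cap the payload size.
  tr -d '\000-\010\013\014\016-\037\177' | head -c 16384
}

# e.g.:  make 2>&1 | sanitize   # then pipe into the prompt, inside markers
```

Filtering reduces the attack surface but is not a substitute for restricting the tool permissions granted to the CLI in the first place.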
Recommendations
- AI analysis detected serious security threats in this skill; review the findings above before enabling it.
Audit Metadata