llm-models
Fail
Audited by Gen Agent Trust Hub on Mar 4, 2026
Risk Level: HIGHREMOTE_CODE_EXECUTIONCOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
- [REMOTE_CODE_EXECUTION]: The skill's setup instructions involve running 'curl -fsSL https://cli.inference.sh | sh'. This pattern downloads and immediately executes a shell script from a remote server. While common for CLI tools, this represents a high-risk pattern as the remote script's content could be modified to execute arbitrary code in the user's environment.
- [COMMAND_EXECUTION]: The skill utilizes the Bash tool to execute 'infsh' commands. This allows the agent to interact with the local filesystem and network via the CLI tool.
- [PROMPT_INJECTION]: The skill is vulnerable to indirect prompt injection because it accepts untrusted data and interpolates it into JSON structures for LLM processing ('infsh app run ... --input'). This can allow attackers to bypass agent constraints by embedding instructions in the data being processed.
- Ingestion points: User-provided prompts passed into the '--input' JSON parameter in file SKILL.md.
- Boundary markers: No explicit delimiters or instructions are used to separate user data from system instructions in the provided examples.
- Capability inventory: The agent has access to the Bash tool and can execute CLI-based model calls.
- Sanitization: No sanitization or validation of the input strings is performed before they are passed to the external model API.
Recommendations
- HIGH: Downloads and executes remote code from: https://cli.inference.sh - DO NOT USE without thorough review
Audit Metadata