llm-models
Audit Verdict: Fail
Audited by Gen Agent Trust Hub on Mar 8, 2026
Risk Level: CRITICAL
Flagged: REMOTE_CODE_EXECUTION, EXTERNAL_DOWNLOADS, COMMAND_EXECUTION, PROMPT_INJECTION
Full Analysis
- [REMOTE_CODE_EXECUTION]: The skill instructs the agent to run a shell script directly from the internet via `curl -fsSL https://cli.inference.sh | sh`. This is a critical security risk: it executes arbitrary, unverified code on the local machine.
- [EXTERNAL_DOWNLOADS]: The skill downloads binary files from `dist.inference.sh` during setup. This relies entirely on the security of the vendor's distribution infrastructure, with no local verification before the first run.
- [COMMAND_EXECUTION]: The skill requests permission to execute the `infsh` CLI tool with any arguments (`infsh *`). This grants the agent broad control over the local environment and the vendor's API.
- [PROMPT_INJECTION]: The skill is vulnerable to indirect prompt injection.
  1. Ingestion points: Data enters via the JSON prompt fields described in `SKILL.md`.
  2. Boundary markers: Uses basic JSON structure but lacks explicit delimiters or instructions to ignore embedded commands.
  3. Capability inventory: Includes shell command execution via the `infsh` tool.
  4. Sanitization: No sanitization or filtering of input data is performed before it is sent to the LLM.
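The remote-execution finding above can be mitigated by downloading the script to disk, verifying a pinned checksum, and only then running it. The sketch below is illustrative, not part of the audited skill: `verify_then_run` is a hypothetical helper, and the expected SHA-256 would have to be obtained out-of-band (e.g. from signed release notes), since `cli.inference.sh` publishes no checksum we can cite here.

```shell
#!/bin/sh
# Safer alternative to `curl ... | sh`: save the script to a file,
# check it against a pinned SHA-256, and refuse to execute on mismatch.
# The caller supplies the expected hash obtained out-of-band.
verify_then_run() {
  script="$1"
  expected="$2"
  actual="$(sha256sum "$script" | awk '{print $1}')"
  if [ "$actual" != "$expected" ]; then
    echo "checksum mismatch: refusing to run $script" >&2
    return 1
  fi
  # Hash matched: the file is exactly what was pinned, so run it.
  sh "$script"
}
```

In practice one would first `curl -fsSL https://cli.inference.sh -o install.sh`, read the file, and call `verify_then_run install.sh "$PINNED_HASH"`; the key difference from the audited flow is that nothing executes before the content is inspected and verified.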
Recommendations
- HIGH: Downloads and executes remote code from https://cli.inference.sh. DO NOT USE without thorough review.
- AI analysis detected serious security threats.
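One way to narrow the broad `infsh *` grant flagged above is to route the agent through an allowlist wrapper instead of giving it the raw binary. This is a sketch only: the subcommand names are hypothetical placeholders, since the audit does not enumerate the real `infsh` command surface.

```shell
#!/bin/sh
# Allowlist wrapper: the agent calls safe_infsh instead of infsh directly,
# and only pre-approved subcommands get through. Names in ALLOWED are
# hypothetical examples, not documented infsh subcommands.
ALLOWED="list status"

safe_infsh() {
  cmd="$1"
  for a in $ALLOWED; do
    if [ "$cmd" = "$a" ]; then
      # A real wrapper would now exec: infsh "$@"
      echo "allowed: $cmd"
      return 0
    fi
  done
  echo "blocked: $cmd" >&2
  return 1
}
```

Anything outside the allowlist is rejected with a nonzero exit, so the agent's effective capability shrinks from "any `infsh` invocation" to the enumerated set.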
Audit Metadata