llm-models
Fail
Audited by Gen Agent Trust Hub on Feb 18, 2026
Risk Level: CRITICALREMOTE_CODE_EXECUTIONEXTERNAL_DOWNLOADSCOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
- REMOTE_CODE_EXECUTION (CRITICAL): The skill explicitly instructs users to execute
curl -fsSL https://cli.inference.sh | sh. This is a critical security risk as it downloads and runs a script from the internet with the user's shell privileges without any verification or pinning of the content. - EXTERNAL_DOWNLOADS (HIGH): The skill downloads and installs software from
cli.inference.sh, which is not a member of the Trusted External Sources list. This increases the risk of supply chain attacks or typosquatting. - COMMAND_EXECUTION (MEDIUM): The skill requests broad permissions to execute any command starting with
infshviaallowed-tools: Bash(infsh *). This capability, combined with the unverified installation method, provides a significant attack surface. - PROMPT_INJECTION (LOW): The skill is vulnerable to Indirect Prompt Injection (Category 8).
- Ingestion points: Untrusted data enters the agent context via the
--inputargument ininfsh app runcommands across various examples inSKILL.md. - Boundary markers: Absent. There are no delimiters or instructions provided to the LLM to ignore potentially malicious instructions within the input JSON.
- Capability inventory: The
infshtool facilitates network operations to external model providers and executes subcommands. - Sanitization: Absent. The user-provided prompt is interpolated directly into the model request without escaping or filtering.
Recommendations
- HIGH: Downloads and executes remote code from: https://cli.inference.sh - DO NOT USE without thorough review
- AI detected serious security threats
Audit Metadata