llm-models
Fail
Audited by Gen Agent Trust Hub on Mar 25, 2026
Risk Level: HIGHREMOTE_CODE_EXECUTIONCOMMAND_EXECUTIONEXTERNAL_DOWNLOADSPROMPT_INJECTION
Full Analysis
- [REMOTE_CODE_EXECUTION]: The skill instructs the agent to install its required CLI tool using
curl -fsSL https://cli.inference.sh | sh. Executing remote scripts piped directly into a shell is a dangerous pattern as it bypasses local verification of the code being run. - [COMMAND_EXECUTION]: The skill is configured to use the
Bashtool to execute theinfshCLI and its installation commands. This grants the skill the ability to run arbitrary system commands under the allowed tool scope. - [EXTERNAL_DOWNLOADS]: The installation process involves downloading a binary executable from
dist.inference.sh. While the skill notes checksum verification, the initial script download and execution occur before such checks. - [PROMPT_INJECTION]: Indirect Prompt Injection Surface. The skill takes user input and interpolates it into a JSON object passed to an external LLM via the
infshtool. This could allow maliciously crafted input to influence the behavior of the downstream model. - Ingestion points: User-supplied prompt strings used in commands like
infsh app run ... --input '{"prompt": "..."}'inSKILL.md. - Boundary markers: The skill uses JSON formatting to encapsulate input, which provides some structural separation but is not a security boundary against injection.
- Capability inventory: Access to the
Bashtool and network-connected LLM APIs via theinfshutility. - Sanitization: No evidence of input validation, escaping, or sanitization before interpolation into the JSON payload.
Recommendations
- HIGH: Downloads and executes remote code from: https://cli.inference.sh - DO NOT USE without thorough review
Audit Metadata