llm-models
Fail
Audited by Gen Agent Trust Hub on Feb 16, 2026
Risk Level: CRITICAL
Findings: REMOTE_CODE_EXECUTION, EXTERNAL_DOWNLOADS, COMMAND_EXECUTION, PROMPT_INJECTION
Full Analysis
- REMOTE_CODE_EXECUTION (CRITICAL): The skill instructs the agent to execute `curl -fsSL https://cli.inference.sh | sh`. This is a "pipe-to-shell" pattern that downloads and runs code from an external server with no integrity checks or verification. The domain `inference.sh` is not a recognized trusted source.
- EXTERNAL_DOWNLOADS (HIGH): The installation process downloads an opaque binary (`infsh`) whose behavior cannot be verified through the provided skill source code.
- COMMAND_EXECUTION (HIGH): The skill requests broad permissions for the Bash tool (`infsh *`). This allows the agent to execute any functionality provided by the downloaded binary, which was installed via an untrusted remote script.
- PROMPT_INJECTION (MEDIUM): As an LLM access tool, this skill is highly susceptible to indirect prompt injection.
  - Ingestion points: Data entered via the `--input` argument or `input.json` file.
  - Boundary markers: None present in the command structure.
  - Capability inventory: The `infsh` tool makes external network requests to third-party LLM providers (OpenRouter).
  - Sanitization: There is no evidence of sanitization for data being passed into the LLM prompt, meaning malicious content in processed data could override agent behavior.
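For contrast with the pipe-to-shell pattern flagged above, a safer install flow downloads the script to disk, compares it against a checksum published over a separate trusted channel, and executes it only on a match. The sketch below uses local stand-ins throughout: the script content and the `expected` checksum are placeholders, not real inference.sh artifacts, and a real flow would take `expected` from the vendor's signed release notes rather than compute it locally.

```shell
#!/bin/sh
# Sketch: checksum-gated install as an alternative to `curl ... | sh`.
set -eu

tmpdir=$(mktemp -d)
script="$tmpdir/install.sh"

# Stand-in for: curl -fsSL https://example.invalid/install.sh -o "$script"
printf 'echo installing\n' > "$script"

# Stand-in for a checksum the vendor publishes out-of-band; computed locally
# here only so the sketch is self-contained.
expected=$(sha256sum "$script" | cut -d' ' -f1)

actual=$(sha256sum "$script" | cut -d' ' -f1)
if [ "$actual" = "$expected" ]; then
    echo "checksum ok"
    sh "$script"    # execute only after verification succeeds
else
    echo "checksum mismatch, refusing to run" >&2
    exit 1
fi

rm -rf "$tmpdir"
```

The key property is that the downloaded bytes are pinned to a value obtained through a second channel before anything runs, so a compromised download server cannot silently substitute its own payload.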
Recommendations
- CRITICAL: Downloads and executes remote code from an untrusted source (https://cli.inference.sh). DO NOT USE.
- Automated analysis detected serious security threats.
Audit Metadata