llm-models

Fail

Audited by Gen Agent Trust Hub on Feb 16, 2026

Risk Level: CRITICALREMOTE_CODE_EXECUTIONEXTERNAL_DOWNLOADSCOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
  • REMOTE_CODE_EXECUTION (CRITICAL): The skill instructs the agent to execute curl -fsSL https://cli.inference.sh | sh. This is a 'pipe-to-shell' pattern that downloads and runs code from an external server with no integrity checks or verification. The domain inference.sh is not a recognized trusted source.
  • EXTERNAL_DOWNLOADS (HIGH): The installation process downloads an opaque binary (infsh) whose behavior cannot be verified through the provided skill source code.
  • COMMAND_EXECUTION (HIGH): The skill requests broad permissions for the Bash tool (infsh *). This allows the agent to execute any functionality provided by the downloaded binary, which was installed via an untrusted remote script.
  • PROMPT_INJECTION (MEDIUM): As an LLM access tool, this skill is highly susceptible to indirect prompt injection.
  • Ingestion points: Data entered via the --input argument or input.json file.
  • Boundary markers: None present in the command structure.
  • Capability inventory: The infsh tool makes external network requests to third-party LLM providers (OpenRouter).
  • Sanitization: There is no evidence of sanitization for data being passed into the LLM prompt, meaning malicious content in processed data could override agent behavior.
Recommendations
  • CRITICAL: Downloads and executes remote code from untrusted source(s): https://cli.inference.sh - DO NOT USE
  • AI detected serious security threats
Audit Metadata
Risk Level
CRITICAL
Analyzed
Feb 16, 2026, 09:16 AM