llm-models
Fail
Audited by Gen Agent Trust Hub on Feb 19, 2026
Risk Level: CRITICAL
Findings: REMOTE_CODE_EXECUTION · EXTERNAL_DOWNLOADS · COMMAND_EXECUTION · PROMPT_INJECTION
Full Analysis
- [REMOTE_CODE_EXECUTION] (CRITICAL): The skill instructions include the command 'curl -fsSL https://cli.inference.sh | sh', a high-risk piped remote code execution pattern. This is a confirmed, untrusted RCE vector that can lead to complete system compromise.
- [EXTERNAL_DOWNLOADS] (HIGH): The skill relies on binaries and scripts from a non-whitelisted external domain (inference.sh). It also promotes further unverified installations using 'npx skills add'.
- [COMMAND_EXECUTION] (MEDIUM): The skill requests 'Bash(infsh *)' permissions, which give the agent a significant attack surface for executing arbitrary commands through the unverified CLI tool installed from an insecure source.
- [PROMPT_INJECTION] (LOW): The skill passes external inputs to 'infsh app run' commands via the '--input' flag without applying sanitization or boundary markers (Mandatory Evidence: ingestion point in CLI arguments; boundary markers absent; capability inventory includes bash execution; no sanitization present). A mitigation sketch follows this list.
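The prompt injection exposure can be narrowed by fencing whatever reaches the '--input' flag. The bash sketch below is illustrative only: the app name, delimiter strings, and length cap are assumptions, and only the 'infsh app run ... --input' command shape comes from the skill under audit.

```bash
#!/usr/bin/env bash
# Hedged sketch, not part of the audited skill: wrap untrusted text in
# boundary markers before it reaches 'infsh app run --input'. The app
# name, delimiters, and 4096-byte cap are placeholders.
set -eu

raw_input="$1"   # e.g. content fetched from an external source

# Drop control characters so terminal escapes cannot ride along.
sanitized=$(printf '%s' "$raw_input" | tr -d '\000-\010\013\014\016-\037')
# Clamp length so a hostile payload cannot flood the context.
sanitized=${sanitized:0:4096}

# Fence the data so instructions and content stay distinguishable.
wrapped="<<<UNTRUSTED_INPUT_START>>>
${sanitized}
<<<UNTRUSTED_INPUT_END>>>"

infsh app run example-app --input "$wrapped"
```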
Recommendations
- HIGH: Downloads and executes remote code from https://cli.inference.sh. DO NOT USE without thorough review; a review-first install flow is sketched after these recommendations.
- Automated analysis detected serious security threats in this skill.
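Reviewers who still need the CLI can replace the flagged piped install with a fetch-then-review flow. The bash sketch below is a suggestion under stated assumptions, not an official install path; only the URL https://cli.inference.sh appears in the skill itself.

```bash
#!/usr/bin/env bash
# Hedged sketch: fetch the installer to disk, read it, and record its
# hash before running it, instead of piping it straight to sh.
set -eu

url="https://cli.inference.sh"
installer="$(mktemp)"

# Download to disk first so the script can be read before anything runs.
curl -fsSL "$url" -o "$installer"

# Inspect manually, or diff against a previously vetted copy.
less "$installer"

# Record the hash so future installs can be compared against what was reviewed.
sha256sum "$installer"

# Only after review, execute it explicitly rather than through an opaque pipe.
sh "$installer"
```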
Audit Metadata