llm-models
Fail
Audited by Gen Agent Trust Hub on Feb 19, 2026
Risk Level: CRITICALREMOTE_CODE_EXECUTIONEXTERNAL_DOWNLOADSCOMMAND_EXECUTION
Full Analysis
- [REMOTE_CODE_EXECUTION] (CRITICAL): The skill's 'Quick Start' instructions include the command
curl -fsSL https://cli.inference.sh | sh. This is a critical security risk as it executes a remotely hosted script directly in the user's shell without any prior verification or sandboxing. The domaininference.shis not among the verified trusted sources. - [EXTERNAL_DOWNLOADS] (HIGH): The installation process fetches binary files from
dist.inference.sh. While the documentation claims to perform checksum verification, the process is entirely dependent on the integrity of an untrusted third-party server and script. - [COMMAND_EXECUTION] (MEDIUM): The skill requests broad
Bashtool permissions for theinfshcommand. This grants the agent the capability to perform various actions on the system, including logging into remote accounts and running arbitrary AI applications distributed by the provider. - [INDIRECT_PROMPT_INJECTION] (LOW): The skill creates a surface for indirect prompt injection as it processes and displays output from external LLMs without visible boundary markers or sanitization.
- Ingestion points:
infsh app runmodel outputs (SKILL.md). - Boundary markers: Absent.
- Capability inventory:
Bash(infsh *)(SKILL.md). - Sanitization: Absent.
Recommendations
- HIGH: Downloads and executes remote code from: https://cli.inference.sh - DO NOT USE without thorough review
- AI detected serious security threats
Audit Metadata