llm-models
Fail
Audited by Gen Agent Trust Hub on Feb 18, 2026
Risk Level: CRITICAL
Findings: REMOTE_CODE_EXECUTION, EXTERNAL_DOWNLOADS, COMMAND_EXECUTION, PROMPT_INJECTION
Full Analysis
- [REMOTE_CODE_EXECUTION] (CRITICAL): The skill documentation explicitly instructs users to run `curl -fsSL https://cli.inference.sh | sh` in the Quick Start section of `SKILL.md`. This pattern is a critical vulnerability as it executes code directly from a remote, untrusted server without any verification or integrity checks.
- [EXTERNAL_DOWNLOADS] (MEDIUM): The skill utilizes `npx` to fetch additional skills from the `inference-sh` organization in `SKILL.md`. Since this organization is not on the trusted list, these downloads are considered unverifiable and risky.
- [COMMAND_EXECUTION] (MEDIUM): The skill requests permission to use the `Bash` tool with `infsh *` in `SKILL.md`. This allows for broad execution of commands through a CLI tool that was installed via an insecure method, significantly increasing the attack surface.
- [PROMPT_INJECTION] (LOW): The skill exhibits an indirect prompt injection surface as it processes untrusted user data via LLM prompts without documented boundary markers or sanitization. Evidence: (1) Ingestion at `SKILL.md` via model prompt examples; (2) Boundary markers are absent; (3) Capability for `Bash` execution exists via the `infsh` tool; (4) No sanitization or escaping steps are documented.
Recommendations
- HIGH: Downloads and executes remote code from: https://cli.inference.sh - DO NOT USE without thorough review
- AI detected serious security threats
Audit Metadata