llm-models
Pass
Audited by Gen Agent Trust Hub on Mar 17, 2026
Risk Level: SAFECOMMAND_EXECUTIONEXTERNAL_DOWNLOADSPROMPT_INJECTION
Full Analysis
- [COMMAND_EXECUTION]: The skill utilizes the
infshCLI tool to manage user sessions and run language model applications on the Inference.sh platform. - [EXTERNAL_DOWNLOADS]: The documentation recommends the installation of related skills using
npx skills add inference-sh/skills, which refers to packages within the vendor's own ecosystem. - [PROMPT_INJECTION]: The skill accepts untrusted user input to be used as prompts for external language models, which is an inherent surface for indirect prompt injection. Ingestion points: Untrusted data is passed to the LLM via the
--inputargument in command examples. Boundary markers: The provided examples do not use delimiters or instructions to prevent the LLM from obeying instructions embedded in the user data. Capability inventory: The skill has access to theBash(infsh *)tool to execute remote operations. Sanitization: There is no evidence of input validation or sanitization within the skill's defined instructions.
Audit Metadata