llm-models
Pass
Audited by Gen Agent Trust Hub on Apr 9, 2026
Risk Level: SAFEEXTERNAL_DOWNLOADSCOMMAND_EXECUTIONDATA_EXFILTRATIONPROMPT_INJECTION
Full Analysis
- [EXTERNAL_DOWNLOADS]: The skill provides instructions to download and install a CLI tool from a remote repository hosted on GitHub (inference-sh/skills).
- [COMMAND_EXECUTION]: The skill utilizes a custom CLI tool,
infsh, to perform operations such as logging in, listing available models, and executing inferences. - [DATA_EXFILTRATION]: User prompts and system instructions are transmitted to external services (OpenRouter and inference.sh) to process language model requests.
- [PROMPT_INJECTION]: The skill exhibits surface area for indirect prompt injection by processing external user input.
- Ingestion points: User-provided prompt text is ingested through the
--inputargument of theinfsh app runcommand inSKILL.md. - Boundary markers: The input is structured as a JSON string, which provides a technical boundary but does not filter the semantic content of the prompt.
- Capability inventory: The skill has the capability to execute shell commands restricted to the
infshbinary and its arguments as defined inallowed-tools. - Sanitization: There is no evidence of built-in sanitization or filtering to prevent the agent from obeying instructions embedded within the user-provided prompt data.
Audit Metadata