prompt-engineering
Warn
Audited by Gen Agent Trust Hub on Apr 16, 2026
Risk Level: MEDIUMEXTERNAL_DOWNLOADSREMOTE_CODE_EXECUTIONPROMPT_INJECTION
Full Analysis
- [EXTERNAL_DOWNLOADS]: The skill directs users to download installation instructions and documentation from an external GitHub repository (github.com/inference-sh/skills).
- [REMOTE_CODE_EXECUTION]: The skill utilizes the
npx skills addcommand to fetch and install additional skills from a remote source (inference-sh/skills). This mechanism can lead to the execution of untrusted code if the source repository is compromised or malicious. - [PROMPT_INJECTION]: The skill provides numerous prompt templates that are vulnerable to indirect prompt injection because they ingest untrusted data and interpolate it directly into model prompts without sanitization or boundary markers.
- Ingestion points: Prompt fields in the
infsh app runcommands and placeholders like[code]or[article text]in the templates. - Boundary markers: Absent. The templates do not use delimiters or include instructions to the model to disregard instructions contained within the user-supplied data.
- Capability inventory: The skill enables interaction with powerful generative models (Claude, GPT-4, FLUX, Veo) via the
infshCLI. - Sanitization: No input validation or escaping mechanisms are implemented for external data before it is processed by the AI models.
Audit Metadata