ai-rag-pipeline
Fail
Audited by Gen Agent Trust Hub on Mar 8, 2026
Risk Level: HIGHEXTERNAL_DOWNLOADSREMOTE_CODE_EXECUTIONCOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
- [EXTERNAL_DOWNLOADS]: Fetches the CLI installation script and binary from the vendor's official domains (cli.inference.sh and dist.inference.sh).
- [REMOTE_CODE_EXECUTION]: The Quick Start guide recommends an installation method using
curl -fsSL https://cli.inference.sh | sh, which pipes a remote script directly into the shell for execution. - [COMMAND_EXECUTION]: The skill requires the
Bash(infsh *)tool permission to execute the vendor's CLI commands for searching and running LLM models. - [PROMPT_INJECTION]: The skill is vulnerable to indirect prompt injection because it ingests untrusted data from external web sources and interpolates it into prompts without sufficient sanitization or boundary markers.
- Ingestion points: Web search results from Tavily and Exa, as well as content extracted from arbitrary URLs.
- Boundary markers: Absent; untrusted variables like
$SEARCH,$CONTENT, and$EVIDENCEare placed directly into the LLM prompt strings. - Capability inventory: The skill can execute arbitrary subcommands of the
infshCLI tool. - Sanitization: No escaping or validation is performed on the retrieved web content before it is sent to the LLM.
Recommendations
- HIGH: Downloads and executes remote code from: https://cli.inference.sh - DO NOT USE without thorough review
Audit Metadata