ollama-integration
Fail
Audited by Gen Agent Trust Hub on Feb 27, 2026
Risk Level: HIGHREMOTE_CODE_EXECUTIONCOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
- [REMOTE_CODE_EXECUTION]: Fetches and executes an installation script from the official Ollama website using 'curl -fsSL https://ollama.com/install.sh | sh'.
- [COMMAND_EXECUTION]: Provides instructions to execute system commands including 'brew install ollama' and 'ollama pull' for model management.
- [PROMPT_INJECTION]: The OllamaClient implementation contains an indirect prompt injection surface in the generateResponse and generateStreamingResponse methods.
- Ingestion points: The 'prompt' and 'context' arguments in src/utils/ollama.ts are directly interpolated into the request body.
- Boundary markers: Uses simple string labels like 'Context:' and 'Question:' which do not provide robust isolation from malicious content within the context variable.
- Capability inventory: Performs network requests to the Ollama API and processes responses for downstream application use.
- Sanitization: No escaping, filtering, or validation is performed on the input strings before interpolation into the prompt template.
Recommendations
- HIGH: Downloads and executes remote code from: https://ollama.com/install.sh - DO NOT USE without thorough review
Audit Metadata