llm-ollama-integration
Pass
Audited by Gen Agent Trust Hub on Mar 1, 2026
Risk Level: SAFEPROMPT_INJECTION
Full Analysis
- [PROMPT_INJECTION]: The skill exhibits an indirect prompt injection surface by directly embedding untrusted user input into the prompts sent to the LLM.\n
- Ingestion points: User-provided
textis ingested via thetranslatemethod and various prompt factory functions likecreate_chk_to_en_promptandcreate_en_to_chk_promptinSKILL.md.\n - Boundary markers: The skill lacks delimiters (e.g., XML tags, triple backticks) or specific 'ignore embedded instructions' warnings that would prevent the LLM from interpreting the content of the text as new instructions.\n
- Capability inventory: The skill uses the
requestslibrary to perform network operations to an Ollama API (local or remote), allowing it to process and generate content based on these prompts.\n - Sanitization: No input validation, escaping, or filtering is performed on the
textvariable before it is interpolated into the final prompt strings.
Audit Metadata