llm-ollama-integration

Pass

Audited by Gen Agent Trust Hub on Mar 1, 2026

Risk Level: SAFEPROMPT_INJECTION
Full Analysis
  • [PROMPT_INJECTION]: The skill exhibits an indirect prompt injection surface by directly embedding untrusted user input into the prompts sent to the LLM.\n
  • Ingestion points: User-provided text is ingested via the translate method and various prompt factory functions like create_chk_to_en_prompt and create_en_to_chk_prompt in SKILL.md.\n
  • Boundary markers: The skill lacks delimiters (e.g., XML tags, triple backticks) or specific 'ignore embedded instructions' warnings that would prevent the LLM from interpreting the content of the text as new instructions.\n
  • Capability inventory: The skill uses the requests library to perform network operations to an Ollama API (local or remote), allowing it to process and generate content based on these prompts.\n
  • Sanitization: No input validation, escaping, or filtering is performed on the text variable before it is interpolated into the final prompt strings.
Audit Metadata
Risk Level
SAFE
Analyzed
Mar 1, 2026, 01:10 AM