gemini-imagegen

Fail

Audited by Gen Agent Trust Hub on Feb 15, 2026

Risk Level: HIGHPROMPT_INJECTIONCOMMAND_EXECUTION
Full Analysis
  • Indirect Prompt Injection (HIGH): The skill ingests untrusted text prompts which are passed to an LLM capable of file system modification. This creates a high-risk surface for indirect injection. \n
  • Ingestion points: instruction argument in scripts/generate_image.py, scripts/edit_image.py, scripts/compose_images.py; user_input in scripts/multi_turn_chat.py.\n
  • Boundary markers: None. Instructions are interpolated directly without delimiters or system-level warnings.\n
  • Capability inventory: File read/write via PIL (Image.save), network access via the Google GenAI SDK.\n
  • Sanitization: None. External content is not escaped or validated before processing.\n- Path Traversal (HIGH): The interactive chat script allows arbitrary file paths for saving images, enabling unauthorized file overwrites outside the target directory. \n
  • Evidence: In scripts/multi_turn_chat.py, filepath = self.output_dir / filename fails to sanitize the filename argument from the /save command. An attacker can provide absolute paths or traversal sequences (e.g., ../../) to target sensitive system files.\n- Metadata Poisoning (MEDIUM): Documentation and implementation refer to non-existent models such as 'Gemini 3 Pro' and 'Nano Banana Pro'. This misleading metadata can deceive users or automated agents regarding the skill's actual capabilities and security profile.
Recommendations
  • AI detected serious security threats
Audit Metadata
Risk Level
HIGH
Analyzed
Feb 15, 2026, 10:09 PM