picture
Pass
Audited by Gen Agent Trust Hub on Feb 21, 2026
Risk Level: SAFEPROMPT_INJECTION
Full Analysis
- Prompt Injection (LOW): The skill is vulnerable to indirect prompt injection because it ingests untrusted user input to form the image generation prompt.
- Ingestion points: User-provided text strings are passed via CLI arguments to
imagen.shand then togenerate.py. - Boundary markers: Absent; user input is interpolated directly into the API request content without delimiters or secondary instructions.
- Capability inventory:
generate.pyperforms network requests to the Google GenAI API and writes image files to the local filesystem (./images/).imagen.shexecutes thesecurityutility and spawns a Python process. - Sanitization: No local sanitization or validation of the prompt is performed; the skill relies entirely on the safety filters of the upstream Imagen API.
- Data Exposure & Exfiltration (SAFE): The skill utilizes the macOS Keychain via the
securitycommand to retrieve thegemini-api-key. This is a recommended security practice for managing secrets in a CLI context and prevents hardcoding credentials in configuration files. - Command Execution (SAFE): The
imagen.shscript executes a fixed Python path and a specific system utility (security). It does not provide a mechanism for arbitrary command injection.
Audit Metadata