nano-banana-pro
Fail
Audited by Gen Agent Trust Hub on Feb 16, 2026
Risk Level: HIGHPROMPT_INJECTIONCOMMAND_EXECUTIONEXTERNAL_DOWNLOADS
Full Analysis
- PROMPT_INJECTION (HIGH): Indirect Prompt Injection vulnerability.
- Ingestion points: The
--promptand--input-imagearguments in 'scripts/generate_image.py' ingest untrusted external data. - Boundary markers: Absent. The user-provided prompt is passed directly to the 'google-genai' client without delimiters or protective instructions.
- Capability inventory: The script has the ability to write files to the local disk ('image.save'), read local files ('PILImage.open'), and make external network calls to the Google Gemini API.
- Sanitization: Absent. There is no sanitization of the prompt or validation of the input images.
- COMMAND_EXECUTION (HIGH): Arbitrary File Write via Path Traversal.
- Evidence: In 'scripts/generate_image.py', the '--filename' argument is used directly in 'Path(args.filename)' and 'image.save(str(output_path))'.
- Risk: An attacker could exploit this by tricking the agent into specifying a filename like '
/.ssh/authorized_keys' or '/.bashrc', allowing the attacker to overwrite critical system configuration files and potentially gain persistent access or execute code. - EXTERNAL_DOWNLOADS (LOW): Use of external dependencies from trusted sources.
- Evidence: The script installs 'google-genai', 'pillow', and 'python-dotenv' via 'uv'.
- Status: These findings are downgraded to LOW/INFO per [TRUST-SCOPE-RULE] as they are sourced from a trusted organization (Google) and reputable package registries (PyPI).
Recommendations
- AI detected serious security threats
Audit Metadata