image-analysis
Warn
Audited by Gen Agent Trust Hub on Apr 1, 2026
Risk Level: MEDIUMDATA_EXFILTRATIONPROMPT_INJECTIONCREDENTIALS_UNSAFE
Full Analysis
- [DATA_EXFILTRATION]: The skill implements a mechanism that reads the full content of local files and transmits them to external APIs. In
scripts/analyze_image.py, the_image_to_data_urlfunction reads bytes from a path provided via the--imageargument. There is no validation to ensure the file is actually an image or resides within an expected directory, meaning sensitive files (such as SSH keys or configuration files) could be read and exfiltrated to an LLM provider if the agent is manipulated into targeting them.\n- [PROMPT_INJECTION]: The skill is vulnerable to indirect prompt injection through text contained within the images it analyzes.\n - Ingestion points: Local image files provided to
scripts/analyze_image.py.\n - Boundary markers: Absent. The prompt construction in
analyze_imagedoes not use delimiters or instructions to the LLM to ignore potentially adversarial text found inside the image.\n - Capability inventory: The skill possesses file-read capabilities and makes authenticated network requests to LLM providers.\n
- Sanitization: None. The script does not perform OCR pre-scanning or content filtering on the images before sending them to the model.\n- [CREDENTIALS_UNSAFE]: The script
scripts/analyze_image.pyprogrammatically accesses the project's internal configuration manager (middleware.config.config_manager) to retrieve sensitive information, including API keys, base URLs, and extra headers used for LLM authentication.
Audit Metadata