openai-image

Pass

Audited by Gen Agent Trust Hub on Mar 23, 2026

Risk Level: SAFE
Full Analysis
  • [SAFE]: The skill provides comprehensive and legitimate functionality for image processing. No malicious behavior or security risks were detected across the analyzed files.
  • [DATA_EXPOSURE]: The script correctly handles sensitive information by retrieving the OPENAI_API_KEY from the environment rather than hardcoding it. This is the standard and recommended security practice for API-based tools.
  • [COMMAND_EXECUTION]: All operations are performed using the official openai Python SDK and standard library modules. There is no evidence of arbitrary shell command execution or unsafe handling of user input in a shell context.
  • [INDIRECT_PROMPT_INJECTION]: The skill ingests untrusted data through JSON batch manifests and local images. While these inputs represent an injection surface, the risk is mitigated because their content is passed only to specialized image/vision models. Basic HTML escaping is also applied when generating the local gallery file, preventing simple XSS if the output is viewed in a browser.
  • [DEPENDENCIES]: The skill relies solely on the well-known and trusted openai Python package. No suspicious or unverified dependencies are present.
  • [PRIVILEGE_ESCALATION]: The script operates entirely in user space, creating a hidden directory (~/.openai_image) for error logging. This is standard practice for development tools and does not constitute an escalation risk.
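The environment-based key handling noted under DATA_EXPOSURE can be sketched as follows. This is a minimal illustration of the pattern, not the skill's actual code; the function name is hypothetical.

```python
import os

def load_api_key() -> str:
    # Read the key from the environment instead of hardcoding it in source,
    # and fail fast with a clear message when it is absent.
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("OPENAI_API_KEY is not set in the environment")
    return key
```

Keeping the key out of the codebase means it never lands in version control or audit logs, which is why this pattern is flagged as the recommended practice above.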
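The HTML-escaping mitigation noted under INDIRECT_PROMPT_INJECTION can be illustrated with a short sketch using the standard library's html module. The gallery markup and function name here are assumptions for illustration, not the skill's implementation.

```python
import html

def gallery_entry(title: str, path: str) -> str:
    # Escape untrusted strings before embedding them in the gallery HTML,
    # so a malicious filename or caption cannot inject markup or scripts.
    safe_title = html.escape(title)
    safe_path = html.escape(path, quote=True)
    return (
        f'<figure><img src="{safe_path}" alt="{safe_title}">'
        f"<figcaption>{safe_title}</figcaption></figure>"
    )
```

With escaping in place, a caption like `<script>alert(1)</script>` is rendered as inert text rather than executed when the gallery file is opened in a browser.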
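The user-space logging setup described under PRIVILEGE_ESCALATION might look roughly like the following sketch, assuming the standard logging module; the log filename and logger name are hypothetical.

```python
import logging
from pathlib import Path

def setup_error_log() -> logging.Logger:
    # Create a hidden per-user directory for error logs; this touches only
    # the user's home directory and requires no elevated privileges.
    log_dir = Path.home() / ".openai_image"
    log_dir.mkdir(exist_ok=True)
    logger = logging.getLogger("openai_image")
    handler = logging.FileHandler(log_dir / "errors.log")
    handler.setLevel(logging.ERROR)
    logger.addHandler(handler)
    return logger
```

Because everything lives under the user's own home directory, this pattern stays within the account's existing permissions, which is the basis for the SAFE assessment of this behavior.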
Audit Metadata
Risk Level: SAFE
Analyzed: Mar 23, 2026, 10:45 PM