NYC

gpt-image-1-5

Pass

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: SAFEDATA_EXFILTRATIONPROMPT_INJECTION
Full Analysis
  • [DATA_EXFILTRATION] (LOW): The script scripts/generate_image.py accepts an OpenAI API key through a command-line argument (--api-key). This practice is insecure as command-line arguments are visible to other users on the system via process monitoring tools like ps and are often saved in shell history files.
  • [DATA_EXFILTRATION] (LOW): The script performs network operations to the OpenAI API, which is not on the explicit whitelist of trusted exfiltration-check domains. Additionally, it allows arbitrary file writing via the --filename parameter. The script uses Path(args.filename).parent.mkdir(parents=True, exist_ok=True) without validating that the path remains within the user's intended workspace, which could permit path traversal if the filename is manipulated (e.g., ../../target_file).
  • [PROMPT_INJECTION] (LOW): The skill possesses a surface for Indirect Prompt Injection (Category 8) by interpolating untrusted user input into API prompts.
  • Ingestion points: User instructions are ingested via the --prompt argument in scripts/generate_image.py.
  • Boundary markers: Absent. The user-provided prompt is passed directly to the model without delimiters or instructions to ignore embedded commands.
  • Capability inventory: The skill can perform network requests to external APIs and write files to the local file system.
  • Sanitization: Absent. There is no logic to sanitize the prompt or validate the output filename against directory traversal patterns.
Audit Metadata
Risk Level
SAFE
Analyzed
Feb 17, 2026, 06:39 PM