paper-explain-figures

Pass

Audited by Gen Agent Trust Hub on Apr 3, 2026

Risk Level: SAFE
Full Analysis
  • [PROMPT_INJECTION]: The skill is subject to indirect prompt injection because it incorporates untrusted data from user-provided notes and research-paper source-code snippets into the instructions sent to AI models.
    - Ingestion points: scripts/paper_explain_figures.py reads CLI notes and extracts code from local files.
    - Boundary markers: Content is interpolated directly into the instructions in _render_worker_prompt without specific delimiters.
    - Capability inventory: Subprocess calls for image conversion and for the AI analysis runners (codex and claude).
    - Sanitization: A _redact_secrets function masks API keys, AWS credentials, and private keys in code snippets before they are processed.
  • [COMMAND_EXECUTION]: The main script executes system utilities (sips, magick) and AI agent runners (codex, claude) via list-based subprocess calls.
    - Isolation: Runtime environment variables (HOME, TMP, XDG_CACHE_HOME, etc.) are redirected to a local job directory, and post-execution workspace audits clean up any unauthorized files.
    - Automation flags: --dangerously-skip-permissions (claude) and --ask-for-approval never (codex) enable non-interactive use of the runner tools.
Audit Metadata
Risk Level
SAFE
Analyzed
Apr 3, 2026, 04:10 PM