github-review-workflow

Pass

Audited by Gen Agent Trust Hub on Apr 30, 2026

Risk Level: SAFEPROMPT_INJECTION
Full Analysis
  • [PROMPT_INJECTION]: Indirect Prompt Injection vulnerability via ingested GitHub PR data. The skill's workflow is centered around acting upon external comments, which are untrusted and can contain malicious instructions.
  • Ingestion points: External data enters the environment through scripts/export_github_review_comments.py, which fetches pull request comments from the GitHub API and stores them in markdown files within the todo/ directory of the local project.
  • Boundary markers: The exported review files generated by render_thread_file use standard Markdown headers to organize content but lack explicit security delimiters or system-level instructions to the processing agent to ignore embedded commands or prompt-like patterns within the comments.
  • Capability inventory: The agents and sub-agents (workers) utilized by this workflow have access to powerful tools, including Bash, Write, and Edit, which could be exploited if an agent accidentally follows instructions embedded in a malicious PR comment.
  • Sanitization: The scripts/comment_formatters.py utility cleans up metadata and noise from comments but does not implement sanitization or filtering to detect or neutralize instructional content intended to hijack agent logic.
Audit Metadata
Risk Level
SAFE
Analyzed
Apr 30, 2026, 01:33 PM