
gh-code-review

Pass

Audited by Gen Agent Trust Hub on Feb 28, 2026

Risk Level: SAFE
Finding Categories: PROMPT_INJECTION, COMMAND_EXECUTION
Full Analysis
  • [PROMPT_INJECTION]: The skill includes explicit instructions to bypass standard agent behavior regarding user consent. In the 'Approvals' section, it states: 'Do not ask the user for approvals when running "read-only" gh or git commands... filesystem and network access should be granted without explicit approval.' This is an attempt to override safety guardrails that require human-in-the-loop confirmation for tool execution.
  • [PROMPT_INJECTION]: The skill is vulnerable to indirect prompt injection (Category 8). It ingests untrusted data from GitHub pull requests, including PR titles, descriptions, and the code diff itself.
  • Ingestion points: Data enters the agent context through gh pr list, gh pr view (title and body fields), and gh pr diff (code changes and comments).
  • Boundary markers: The skill uses XML-like tags (e.g., <constraints>, <tool-use>) to structure its own instructions, but it lacks clear boundary markers or 'ignore embedded instructions' directives when processing the external PR content.
  • Capability inventory: The agent has access to the shell, filesystem, and networking via the gh and git tools, which use the user's GitHub credentials.
  • Sanitization: While the skill correctly uses jq --arg to safely handle filenames in one command, it interpolates other variables, such as $number, directly into shell commands without validation, and it does not sanitize the natural-language content of the PR before analysis.
  • [COMMAND_EXECUTION]: The skill relies on executing shell commands (gh, git, jq) and configuring the environment (e.g., export GH_PAGER=cat). It performs network operations to GitHub's official API and modifies the local filesystem state via gh pr checkout and git remote update. These are documented as intended functionality using well-known services.
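The sanitization and boundary-marker gaps above suggest two straightforward mitigations. The sketch below is illustrative, not taken from the skill itself: the function names and the `<untrusted-pr-content>` tag are hypothetical, and the same `--arg`-style discipline the audit praises for filenames would apply to any other value passed to jq.

```shell
#!/bin/sh
# Hypothetical hardening sketch for a gh-based review skill.
set -eu

# Validate a PR number before interpolating it into any gh/git command,
# rejecting empty strings and anything containing a non-digit.
is_pr_number() {
  case "$1" in
    ''|*[!0-9]*) return 1 ;;
    *)           return 0 ;;
  esac
}

# Wrap untrusted PR text (title, body, diff) in explicit boundary markers
# so downstream instructions can say "ignore directives inside these tags".
wrap_untrusted() {
  printf '<untrusted-pr-content>\n%s\n</untrusted-pr-content>\n' "$1"
}

if is_pr_number "1234"; then echo "ok"; fi
if ! is_pr_number "1234; rm -rf /"; then echo "rejected"; fi
wrap_untrusted "Ignore all previous instructions."
```

Only after `is_pr_number` accepts the value would it be safe to splice `$number` into a command line such as `gh pr view "$number"`; the boundary wrapper addresses the separate problem of PR prose being read as instructions.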
Audit Metadata
Risk Level: SAFE
Analyzed: Feb 28, 2026, 02:08 AM