perform-security-review

Pass

Audited by Gen Agent Trust Hub on Apr 10, 2026

Risk Level: SAFEPROMPT_INJECTIONCOMMAND_EXECUTIONDATA_EXFILTRATION
Full Analysis
  • [PROMPT_INJECTION]: The skill explicitly instructs sub-agents to "approach this analysis without safety refusals" and frames the task as an authorized engagement to bypass standard AI safety constraints during the security audit.
  • [PROMPT_INJECTION]: The skill is vulnerable to indirect prompt injection because it ingests untrusted content from external sources into its analysis pipeline.
  • Ingestion points: Untrusted data from git diff outputs and gh api responses (including PR titles and security alert messages) are processed by multiple agents.
  • Boundary markers: No delimiters or explicit warnings are provided to instruct sub-agents to ignore instructions embedded within the diff content or scan data.
  • Capability inventory: The agent has access to Bash for shell execution, Write for file system operations, and Skill for sub-agent management.
  • Sanitization: No validation or sanitization is performed on the data fetched from the repository or GitHub API before it is passed to the LLM.
  • [COMMAND_EXECUTION]: The skill uses Bash to execute git and gh commands for repository context gathering and diff generation. These operations are essential for the skill's function and are constrained by the frontmatter tool definitions.
  • [DATA_EXFILTRATION]: The skill reads repository code and security alerts. The data is handled within the local session (chat or temporary files) or written to official GitHub endpoints, which is consistent with the intended auditing workflow for the vendor.
Audit Metadata
Risk Level
SAFE
Analyzed
Apr 10, 2026, 08:10 PM