team-review

Pass

Audited by Gen Agent Trust Hub on Apr 17, 2026

Risk Level: SAFECOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
  • [COMMAND_EXECUTION]: The code modification verification step in roles/fixer/role.md executes project-specific test suites such as pytest, jest, and npx tsc using the Bash tool. A malicious project could include configurations designed to execute arbitrary code during these standard verification steps.
  • [PROMPT_INJECTION]: The skill architecture is susceptible to indirect prompt injection (Category 8) because it ingests untrusted codebase data into LLM prompts. Malicious code could influence the analysis or fixing logic performed by the sub-agents.
  • Ingestion points: Files are read using Glob and Read tools in the scanner and reviewer roles.
  • Boundary markers: Absent. Source code is interpolated into prompts for the maestro delegate tool without robust delimiters or isolation instructions.
  • Capability inventory: The skill possesses significant capabilities, including Edit (file modification) and Bash (command execution).
  • Sanitization: LLM-generated findings and fixes are not sanitized or validated before being used to modify the project or run verification commands.
Audit Metadata
Risk Level
SAFE
Analyzed
Apr 17, 2026, 01:12 AM