receiving-code-review
Pass
Audited by Gen Agent Trust Hub on Feb 18, 2026
Risk Level: SAFEPROMPT_INJECTIONCOMMAND_EXECUTION
Full Analysis
- [Prompt Injection] (SAFE): The skill provides behavioral guidelines for the AI to maintain technical rigor and avoid performative politeness. It does not contain instructions to bypass safety filters, extract system prompts, or disregard core constraints.
- [Indirect Prompt Injection] (LOW): The skill is designed to process external code review feedback, which constitutes a vulnerability surface for indirect prompt injection. 1. Ingestion points: External reviewer feedback and GitHub thread comments. 2. Boundary markers: Absent; there are no specific delimiters or instructions to treat external feedback as untrusted data. 3. Capability inventory: The agent can read files (grep), modify the codebase (implementation), and interact with network APIs (gh api). 4. Sanitization: Absent; the skill does not specify any sanitization or validation of the ingested feedback.
- [Command Execution] (SAFE): The skill references the use of 'grep' and the GitHub CLI ('gh api'). These are legitimate tools for its stated purpose of managing code reviews and do not include patterns for arbitrary command execution or privilege escalation.
- [Data Exposure & Exfiltration] (SAFE): No hardcoded credentials or patterns for sensitive data exfiltration were detected. The GitHub API usage is targeted at PR management within the local context.
Audit Metadata