code-review-ai-ai-review
Audited by Socket on Feb 15, 2026
1 alert found:
Malware [Skill Scanner]: Detected system prompt override attempt

All findings:
- [CRITICAL] prompt_injection: Detected system prompt override attempt (PI004) [AITech 1.1]
- [HIGH] command_injection: Backtick command substitution detected (CI003) [AITech 9.1.4]
- [HIGH] command_injection: Backtick command substitution detected (CI003) [AITech 9.1.4]

This skill appears functionally legitimate for automated AI-assisted code review, and there is no evidence of embedded malware or obfuscated malicious code. The primary security concern is data exposure: the skill routinely bundles PR diffs, static-analysis outputs (which may include secrets detected by trufflehog), and environment-provided secrets into prompts sent to third-party LLMs (Anthropic, OpenAI). This creates a real risk of sensitive code or secrets being sent to external providers and stored or processed outside the organization.

Additional risks: no explicit redaction or data minimization, no strong validation of LLM outputs before they are auto-posted to GitHub, and unclear guidance on least-privilege tokens.

Recommendations:
- Redact secrets explicitly before any data leaves the process.
- Minimize the prompts and data sent to external providers.
- Document strict token-scope requirements.
- Validate and sanitize LLM outputs before posting them.
- Obtain operator consent, with notice, before sending sensitive data to third parties.

LLM verification: This code fragment describes a legitimate AI-assisted code-review orchestration skill and contains no clear in-line malware or obfuscated payloads. The dominant security concern is operational: the design offers multiple straightforward paths for sensitive repository data and secrets to reach external AI services and SaaS scanners unless implementers add redaction, prompt sanitization, credential safeguards, and deployment policies. Recommend that implementers add pre-send redaction and sanitization controls before deployment.
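The pre-send redaction recommendation could be sketched as follows. This is a minimal illustration, not the skill's actual behavior: the pattern list, the `redact` helper, and the mask string are all assumptions. A real deployment would reuse the detections already produced by trufflehog rather than maintain a parallel regex list.

```python
import re

# Illustrative patterns for common credential formats; intentionally
# incomplete, since a production list would come from a real detector.
SECRET_PATTERNS = [
    re.compile(r"ghp_[A-Za-z0-9]{36}"),  # GitHub personal access token
    re.compile(r"AKIA[0-9A-Z]{16}"),     # AWS access key ID
    re.compile(
        r"-----BEGIN [A-Z ]*PRIVATE KEY-----[\s\S]+?-----END [A-Z ]*PRIVATE KEY-----"
    ),                                   # PEM-encoded private keys
]


def redact(text: str, mask: str = "[REDACTED]") -> str:
    """Mask anything matching a known secret pattern before the text
    leaves the process (e.g. before it is embedded in an LLM prompt)."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub(mask, text)
    return text
```

Running every diff and static-analysis excerpt through a function like this before prompt assembly would close the most direct exposure path the audit describes.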
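The recommendation to validate LLM outputs before auto-posting to GitHub could likewise be sketched in a few lines. The specific checks shown here (HTML-comment stripping, mention neutralization, a length cap) and the `MAX_COMMENT_LEN` value are illustrative assumptions, not requirements from the audit.

```python
import re

MAX_COMMENT_LEN = 4000  # illustrative cap, not an actual GitHub limit


def sanitize_review_comment(text: str) -> str:
    """Basic output validation before auto-posting an LLM-generated
    review comment. Illustrative, not exhaustive."""
    # Strip HTML comments, a common carrier for hidden instructions
    # aimed at other bots or downstream LLMs reading the PR.
    text = re.sub(r"<!--.*?-->", "", text, flags=re.DOTALL)
    # Wrap @-mentions in backticks so GitHub renders them inert and
    # the bot cannot mass-ping users or teams.
    text = re.sub(r"@([\w-]+)", r"`@\1`", text)
    # Cap length so a runaway generation cannot flood the PR.
    return text[:MAX_COMMENT_LEN]
```

Even a thin layer like this blocks the cheapest abuse paths; stricter deployments might additionally allowlist markdown constructs or require human approval before posting.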