autofix
Audited by Socket on Mar 2, 2026
1 alert found:
Security: This skill's stated purpose (automatically apply CodeRabbit review fixes) is plausible, and the documented workflow aligns with that purpose, but the design carries strong supply-chain and prompt-injection risks.

The most significant issue is the directive to "follow agent prompts literally" and to execute "🤖 Prompt for AI Agents" blocks sourced from PR comments, which are untrusted external content. This enables arbitrary, attacker-controlled instructions to be executed as code changes, committed, and optionally pushed. Combined with an "Auto-fix all" mode that can apply many changes without per-change human approval, plus the ability to run build/lint/test steps and post generated content back to GitHub, this creates viable paths for credential exposure, backdoor insertion, and propagation of malicious modifications.

Mitigations would include:
- strict sanitization and validation of prompts before execution
- human approval required for each change (no bulk auto-apply)
- limited execution privileges
- blocking reads of sensitive credential files

Given these issues, treat the skill as medium-to-high risk for supply-chain abuse and credential/data leakage until stronger safeguards are added.
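The prompt-sanitization mitigation could be sketched as a screening gate that runs before any PR-comment prompt reaches the agent. This is a minimal illustration, not part of the audited skill: the function name `screen_agent_prompt` and the deny-list patterns are assumptions chosen for the example, and a real filter would need far broader coverage plus an allow-list review process.

```python
import re

# Illustrative deny-list of red-flag patterns in untrusted PR-comment prompts.
# A production filter would need much broader, regularly updated coverage.
SUSPICIOUS_PATTERNS = [
    r"\.ssh|id_rsa|\.aws|\.npmrc|credentials",    # credential file references
    r"\bcurl\b|\bwget\b|\bnc\b",                  # common exfiltration tooling
    r"git\s+push\s+--force",                      # destructive repo operations
    r"ignore (all )?previous instructions",       # classic injection phrasing
]

def screen_agent_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_patterns) for an untrusted prompt.

    Any match routes the prompt to mandatory human review instead of
    automatic execution; a clean result still does not imply the prompt
    is safe, only that no known red flag was found.
    """
    hits = [p for p in SUSPICIOUS_PATTERNS
            if re.search(p, prompt, re.IGNORECASE)]
    return (len(hits) == 0, hits)
```

A benign prompt such as "rename this variable for clarity" passes, while one asking the agent to read `~/.ssh/id_rsa` is flagged and held for human review. Pattern matching alone cannot catch every injection, which is why the audit also calls for per-change human approval rather than bulk auto-apply.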