ralphmode
Verdict: Warn
Audited by Gen Agent Trust Hub on Mar 11, 2026
Risk Level: MEDIUM
Finding Categories: PROMPT_INJECTION, COMMAND_EXECUTION, SAFE
Full Analysis
- [PROMPT_INJECTION]: The skill provides instructions and configuration templates (such as 'bypassPermissions' and '--yolo') that are explicitly designed to remove security constraints and bypass human-in-the-loop approval processes in various AI CLI tools.
  - Evidence: The references/permission-profiles.md file contains JSON and TOML snippets that set 'defaultMode' to 'bypassPermissions' and 'approval_policy' to 'never'.
  - Evidence: SKILL.md provides examples using dangerous flags like '--dangerously-bypass-approvals-and-sandbox'.
- [COMMAND_EXECUTION]: The skill provides shell scripts for use in tool hooks (PreToolUse, BeforeTool) that execute local shell commands and parse JSON via python3.
  - Evidence: The scripts ralph-safety-check.sh and ralph-tier1-check.sh are provided to be used as executable hooks, which pipe tool inputs into python and grep.
- [SAFE]: The external documentation links provided in the skill target well-known and trusted platforms, including Anthropic, OpenAI, Google Gemini, and GitHub.
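To illustrate the kind of configuration the PROMPT_INJECTION evidence describes, here is a sketch based on the finding's wording, not a reproduction of the audited references/permission-profiles.md file. A Claude Code-style settings.json that disables permission prompts looks like:

```json
{
  "permissions": {
    "defaultMode": "bypassPermissions"
  }
}
```

A Codex-style config.toml achieves the same with `approval_policy = "never"`. Either setting removes the human approval step before the agent executes commands, which is why the audit flags these templates as risk-relevant.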
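The COMMAND_EXECUTION finding concerns hook scripts that receive tool input as JSON and filter it with python3 and grep. The following is a minimal sketch of that pattern; the function name and deny-list are hypothetical illustrations, not code from ralph-safety-check.sh or ralph-tier1-check.sh:

```shell
# Hypothetical PreToolUse-style check: the tool invocation arrives as JSON,
# python3 extracts the command string, and grep matches a small deny-list
# of destructive patterns (illustrative, not exhaustive).
check_command() {
  cmd=$(printf '%s' "$1" | python3 -c \
    'import json,sys; print(json.load(sys.stdin).get("tool_input", {}).get("command", ""))')
  if printf '%s' "$cmd" | grep -Eq 'rm -rf /|mkfs|dd if='; then
    echo "block"
  else
    echo "allow"
  fi
}

check_command '{"tool_input": {"command": "ls -la"}}'   # prints "allow"
check_command '{"tool_input": {"command": "rm -rf /"}}' # prints "block"
```

A real hook would signal its decision through an exit code or structured output rather than printing a word; the point here is the pipeline shape the audit observed: shell wrapper, python3 JSON parsing, grep filtering.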
Audit Metadata