rlm
Fail
Audited by Gen Agent Trust Hub on Feb 27, 2026
Risk Level: HIGHREMOTE_CODE_EXECUTIONEXTERNAL_DOWNLOADSCOMMAND_EXECUTIONDATA_EXFILTRATIONPROMPT_INJECTION
Full Analysis
- [REMOTE_CODE_EXECUTION]: The README.md recommends an installation method where a remote shell script is piped directly into a bash interpreter (curl -fsSL ... | bash). This allows the execution of unverified code with user-level privileges.- [EXTERNAL_DOWNLOADS]: The install.sh script downloads the core logic (rlm.py) and skill definitions from an external GitHub repository (BowTiedSwan/rlm-skill) that is not identified as a trusted source.- [COMMAND_EXECUTION]: The skill requires the execution of a locally downloaded Python engine (rlm.py) and utilizes standard shell utilities like find and grep to perform its primary functions.- [DATA_EXFILTRATION]: The Python component rlm.py performs broad filesystem scanning using recursive glob patterns (**/*). It lacks filters for sensitive directories such as .ssh or .aws, and files like .env, potentially exposing credentials or private keys to the agent context during automated scans.- [PROMPT_INJECTION]: The skill architecture is susceptible to indirect prompt injection because it reads raw content from local files and injects it into prompts for background agents without sanitization or boundary markers.
- Ingestion points: Local files read via rlm.py.
- Boundary markers: Absent; no delimiters or instructions to ignore embedded commands are present in the SKILL.md protocol.
- Capability inventory: Orchestrates background_task and shell commands based on file analysis.
- Sanitization: None; file content is processed as raw text.
Recommendations
- HIGH: Downloads and executes remote code from: https://raw.githubusercontent.com/BowTiedSwan/rlm-skill/main/install.sh - DO NOT USE without thorough review
- AI detected serious security threats
Audit Metadata