rlm
Pass
Audited by Gen Agent Trust Hub on Apr 3, 2026
Risk Level: SAFEPROMPT_INJECTIONCOMMAND_EXECUTION
Full Analysis
- [PROMPT_INJECTION]: The skill is designed to ingest and process large volumes of untrusted data from external codebases and logs, creating a surface for indirect prompt injection.
- Ingestion points: File contents accessed through shell tools like find and grep, and a local Python script (rlm.py).
- Boundary markers: Absent. There are no instructions to the agent to treat file content as untrusted or to wrap it in specific delimiters.
- Capability inventory: Includes shell access (bash), Python execution (python3), and parallel agent orchestration via the background_task capability.
- Sanitization: Absent. Extracted codebase content is used directly in prompts for sub-agents.
- [COMMAND_EXECUTION]: The skill relies on shell command execution and dynamic script generation for its operations.
- Evidence: The protocol mandates the use of find, grep, and bash for codebase traversal and filtering.
- Evidence: It references a local script at ~/.claude/skills/rlm/rlm.py for programmatic slicing of large files.
- Evidence: The Recovery Mode instructions explicitly direct the agent to generate and execute custom Python scripts at runtime to process file contents.
Audit Metadata