rlm-context-manager
Fail
Audited by Gen Agent Trust Hub on Feb 16, 2026
Risk Level: CRITICAL
Findings: REMOTE_CODE_EXECUTION, COMMAND_EXECUTION, PROMPT_INJECTION, DATA_EXFILTRATION
Full Analysis
- REMOTE_CODE_EXECUTION (CRITICAL): The architecture explicitly uses `pickle` to persist state in `${SKILLS_ROOT}/rlm-context-manager/state/state.pkl`. Since pickle is inherently unsafe, an attacker who can influence the data being processed or write to the state directory can execute arbitrary code when the agent loads the state.
- PROMPT_INJECTION (HIGH): The skill is highly vulnerable to Indirect Prompt Injection (Category 8). It ingests untrusted external content (logs, documents, codebases) and sends it to a sub-LLM (`rlm-subcall`) to 'extract relevant information'.
  - Ingestion point: `/rlm init <context_path>` reads external files into the context manager.
  - Boundary markers: No boundary markers or 'ignore' instructions are defined to separate the untrusted data from the agent's task instructions.
  - Capabilities: The agent has `Bash`, `Write`, and `Edit` permissions, meaning a successful injection can lead to full system compromise.
  - Sanitization: There is no evidence of sanitization or filtering of the ingested content.
- COMMAND_EXECUTION (HIGH): The skill relies heavily on `Bash` to execute the `rlm_repl.py` script. The use of heredocs (`<<'PY'`) and command-line execution of user-controlled queries provides a path for command injection if input parameters are not strictly escaped.
- DATA_EXFILTRATION (MEDIUM): The `Bash` and `Read` permissions, combined with the ability to initialize the REPL with any file path via `/rlm init`, allow exposure of sensitive local files (e.g., credentials or SSH keys) if the agent is manipulated by prompt injection.
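The pickle finding can be demonstrated with a minimal, benign sketch: during `pickle.loads`, an object's `__reduce__` method can make the deserializer invoke an arbitrary callable, which is why loading a writable `state.pkl` is equivalent to code execution. The `Payload` class and `record` function below are illustrative stand-ins, not part of the audited skill.

```python
import pickle

executed = []  # observable side effect standing in for "attacker code"

def record(msg):
    executed.append(msg)

class Payload:
    """Benign stand-in for a malicious pickle. A real payload would
    return something like (os.system, ("...",)) from __reduce__."""
    def __reduce__(self):
        # pickle.loads will call record("attacker code ran") on load
        return (record, ("attacker code ran",))

blob = pickle.dumps(Payload())  # what an attacker writes to state.pkl
pickle.loads(blob)              # "restoring state" runs the callable
```

A safer design persists state in a data-only format such as JSON, which cannot encode callables at all.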
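The command-injection path in the COMMAND_EXECUTION finding arises from splicing user-controlled query text into a shell command line. A hedged sketch of the difference between interpolated and quoted invocations (the `--query` flag name is assumed here for illustration):

```python
import shlex

user_query = "status'; rm -rf $HOME; echo '"  # hostile query text

# Unsafe: interpolated into a shell string, the embedded quote closes
# the argument and the remainder is parsed as additional commands.
unsafe = f"python rlm_repl.py --query '{user_query}'"

# Safer: quote the value before it reaches any shell...
quoted = f"python rlm_repl.py --query {shlex.quote(user_query)}"

# ...or avoid the shell entirely by passing an argv list, e.g. to
# subprocess.run(argv) with shell=False (the default).
argv = ["python", "rlm_repl.py", "--query", user_query]
```

The same caution applies to heredoc bodies (`<<'PY'`): content interpolated into them must never originate from untrusted input.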
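For the DATA_EXFILTRATION finding, a common mitigation is to resolve and confine every path accepted by `/rlm init` to an allow-listed root before reading it. A minimal sketch, assuming a hypothetical `ALLOWED_ROOT`; the audited skill defines no such check:

```python
from pathlib import Path

# Hypothetical allow-listed root; the audited skill has no confinement.
ALLOWED_ROOT = Path("/workspace").resolve()

def validate_context_path(raw: str) -> Path:
    """Resolve raw against ALLOWED_ROOT and reject escapes, whether via
    traversal (../../etc/passwd) or absolute paths (/root/.ssh/id_rsa,
    which pathlib joins as-is and so also fails the containment check)."""
    candidate = (ALLOWED_ROOT / raw).resolve()
    if not candidate.is_relative_to(ALLOWED_ROOT):  # Python 3.9+
        raise ValueError(f"context path escapes allowed root: {candidate}")
    return candidate
```

This does not address the injection findings, but it narrows what a manipulated agent can expose via `Read`.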
Recommendations
- Automated analysis detected serious security threats in this skill; review the findings above before enabling it.
Audit Metadata