dev-rlm

Pass

Audited by Gen Agent Trust Hub on Mar 11, 2026

Risk Level: SAFEPROMPT_INJECTIONCOMMAND_EXECUTION
Full Analysis
  • [PROMPT_INJECTION]: The skill is subject to indirect prompt injection risks because it extracts and analyzes content from external files.\n
  • Ingestion points: The rlm.py script (commands like read, grep, and peek) reads raw content from the local filesystem.\n
  • Boundary markers: The instructions for sub-agents (e.g., in SKILL.md) do not specify the use of delimiters or 'ignore' instructions to prevent the agent from obeying commands embedded in the codebase files.\n
  • Capability inventory: The skill can execute Python scripts, spawn multiple sub-agents, and perform file operations in the /tmp directory.\n
  • Sanitization: No sanitization or validation of the ingested file content is performed before it is passed to the agent's context.\n- [COMMAND_EXECUTION]: The skill relies on the execution of several local tools and scripts.\n
  • Evidence: It frequently calls a local Python script (rlm.py) and utilizes system utilities like jq, rg, grep, find, and tree to index and process codebase data.
Audit Metadata
Risk Level
SAFE
Analyzed
Mar 11, 2026, 10:29 AM