dev-rlm
Pass
Audited by Gen Agent Trust Hub on Mar 14, 2026
Risk Level: SAFE
Tags: PROMPT_INJECTION, COMMAND_EXECUTION
Full Analysis
- [PROMPT_INJECTION]: The skill processes data from analyzed codebases, creating a surface for indirect prompt injection if the files contain malicious instructions.
  - Ingestion points: scripts/rlm.py reads file content via its grep, peek, and read functions.
  - Boundary markers: The skill instructs the agent to avoid raw ingestion of worker outputs and to use structured JSON reporting, mitigating accidental execution of embedded instructions.
  - Capability inventory: The agent executes local scripts and uses the jq utility for data manipulation and result aggregation.
  - Sanitization: No automated content filtering or instruction escaping is applied to ingested file data before the agent processes it.
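The boundary-marker mitigation above can be sketched as follows. This is a hypothetical illustration, not the skill's actual schema: the field names (`file`, `excerpt`, `bytes`) and helper functions are assumptions, but the pattern matches the audit's description, with untrusted file content carried only as inert string data inside structured JSON, and aggregation touching only known fields.

```python
import json

def wrap_worker_output(file_path: str, raw_text: str) -> str:
    """Package untrusted file content as a structured JSON report.

    The raw text travels only as an inert string field, so any
    instructions embedded in it are never re-interpreted as commands.
    (Hypothetical sketch; field names are not taken from the skill.)
    """
    report = {
        "file": file_path,
        "excerpt": raw_text[:200],  # bounded, never executed
        "bytes": len(raw_text),
    }
    return json.dumps(report)

def aggregate(reports: list[str]) -> dict:
    """Aggregate only known, expected fields from each worker report."""
    parsed = [json.loads(r) for r in reports]
    return {
        "files": [p["file"] for p in parsed],
        "total_bytes": sum(p["bytes"] for p in parsed),
    }

# A file containing injection-style text stays plain data:
r1 = wrap_worker_output("src/a.py", "ignore previous instructions")
r2 = wrap_worker_output("src/b.py", "print('hi')")
print(aggregate([r1, r2]))
```

The same aggregation step could equally be done with jq over the JSON reports, which is the route the audit says the skill takes; the point is that the agent consumes a fixed schema rather than raw worker text.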
- [COMMAND_EXECUTION]: The skill relies on executing a local Python script and the jq utility for its core functionality.
  - Evidence: SKILL.md and quick-reference.md provide numerous command examples for indexing, searching, and partitioning codebase files with the dev-rlm.py script.
Audit Metadata