rlm-init
Pass
Audited by Gen Agent Trust Hub on Mar 9, 2026
Risk Level: SAFECOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
- [COMMAND_EXECUTION]: The skill requires the agent to execute internal Python scripts (
inventory.py,distiller.py,inject_summary.py) to manage the project cache and perform file audits. These operations are restricted to the local environment and are necessary for the skill's primary function of cache management. - [PROMPT_INJECTION]: The skill is susceptible to indirect prompt injection (Category 8) due to its core distillation workflow.
- Ingestion points: Files are read from the local filesystem based on user-defined manifests in
distiller.pyandinject_summary.py. - Boundary markers: The prompt templates in
resources/prompts/rlm/(e.g.,rlm_summarize_general.md) interpolate the raw{content}of files directly into the LLM prompt without using delimiters or instructions to ignore embedded commands. - Capability inventory: The skill has the capability to write to the filesystem (JSON cache files) and execute local subprocesses via the provided Python scripts.
- Sanitization: There is no evidence of input validation, escaping, or filtering of the file content before it is processed by the LLM.
Audit Metadata