skills/trampoline-ai/predict-rlm/rlm/Gen Agent Trust Hub

rlm

Pass

Audited by Gen Agent Trust Hub on May 7, 2026

Risk Level: SAFEPROMPT_INJECTIONCOMMAND_EXECUTION
Full Analysis
  • [INDIRECT_PROMPT_INJECTION]: The skill generates agents designed to ingest and process external documents, which creates a surface for indirect prompt injection.
  • Ingestion points: The AnalyzeDocuments signature in agent/signature.py accepts documents: list[File] as untrusted input.
  • Boundary markers: The generated code templates do not include explicit delimiters or instructions to ignore embedded commands within the processed files.
  • Capability inventory: The PredictRLM runner executes dynamically generated Python code in a sandbox and can invoke host-side tools.
  • Sanitization: No sanitization or filtering is performed on the content of the input files before they are processed by the sub-LM.
  • [DYNAMIC_EXECUTION]: The skill's primary function is to build agents that write and execute Python code at runtime using the PredictRLM class.
  • Evidence: The documentation and implementation patterns (e.g., agent/service.py) describe an architecture where an outer LLM generates code to be executed within a sandboxed environment.
  • Mitigation: The execution environment is specified as a Pyodide (WASM) sandbox, which provides process-level isolation and limits host system access.
Audit Metadata
Risk Level
SAFE
Analyzed
May 7, 2026, 02:45 PM