llm-cli
Pass
Audited by Gen Agent Trust Hub on Feb 17, 2026
Risk Level: SAFECOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
- Indirect Prompt Injection (LOW): The skill processes untrusted data from external files (including text, PDF, and base64-encoded media) and interpolates it directly into LLM prompts, which could allow instructions embedded in those files to hijack the agent's behavior.
- Ingestion points:
input_handler.pyreads data fromstdinor user-specified file paths through theload_inputandload_filemethods. - Boundary markers: Absent. In
executor.py, the skill concatenates prompts and content using simple string formatting (f"{prompt}\n\n{content}") without delimiters or instructions to ignore embedded commands. - Capability inventory: The skill uses
subprocess.runandsubprocess.Popeninexecutor.pyto execute the externalllmCLI tool with the processed content. - Sanitization: No sanitization, escaping, or validation is performed on the content read from files before it is sent to the LLM.
- Command Execution (SAFE): The skill invokes the external
llmCLI tool to process data. It uses thesubprocessmodule with arguments passed as a list (avoidingshell=True), which prevents shell injection. Additionally, theselect_modellogic inllm_skill.pyvalidates model identifiers against an internal allowlist inmodels.py, ensuring only recognized models are executed.
Audit Metadata