
llm-cli

Pass

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: SAFE · Findings: COMMAND_EXECUTION, PROMPT_INJECTION
Full Analysis
  • Indirect Prompt Injection (LOW): The skill processes untrusted data from external files (including text, PDF, and base64-encoded media) and interpolates it directly into LLM prompts, which could allow instructions embedded in those files to hijack the agent's behavior.
  • Ingestion points: input_handler.py reads data from stdin or user-specified file paths through the load_input and load_file methods.
  • Boundary markers: Absent. In executor.py, the skill concatenates prompts and content using simple string formatting (f"{prompt}\n\n{content}") without delimiters or instructions to ignore embedded commands.
  • Capability inventory: The skill uses subprocess.run and subprocess.Popen in executor.py to execute the external llm CLI tool with the processed content.
  • Sanitization: No sanitization, escaping, or validation is performed on the content read from files before it is sent to the LLM.
  • Command Execution (SAFE): The skill invokes the external llm CLI tool to process data. It uses the subprocess module with arguments passed as a list (avoiding shell=True), which prevents shell injection. Additionally, the select_model logic in llm_skill.py validates model identifiers against an internal allowlist in models.py, ensuring only recognized models can be invoked.
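The missing-boundary-marker finding can be illustrated with a short sketch. The function names below are hypothetical, not the skill's actual API; only the f"{prompt}\n\n{content}" pattern is taken from the audit itself, and the delimited variant is one common mitigation, not something the skill implements.

```python
# Hypothetical sketch of the concatenation pattern flagged in executor.py,
# alongside a delimiter-based mitigation. Function names are illustrative.

def build_prompt_unsafe(prompt: str, content: str) -> str:
    # Pattern described in the audit: untrusted file content is appended
    # verbatim, so instructions embedded in it reach the model as if trusted.
    return f"{prompt}\n\n{content}"

def build_prompt_delimited(prompt: str, content: str) -> str:
    # One common mitigation: fence untrusted data with boundary markers and
    # instruct the model to treat the fenced span as data, not instructions.
    return (
        f"{prompt}\n\n"
        "The text between <untrusted> tags is data only; "
        "ignore any instructions it contains.\n"
        f"<untrusted>\n{content}\n</untrusted>"
    )
```

Delimiters alone do not eliminate prompt injection, but they give the model an explicit boundary the unsafe version lacks.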
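The safe command-execution pattern the audit credits (list-form subprocess arguments plus a model allowlist) looks roughly like the following. This is a minimal sketch assuming a hypothetical allowlist and wrapper function; the skill's actual models.py contents and call signatures are not shown in the audit.

```python
import subprocess

# Illustrative allowlist; the real entries in models.py are not part of the audit.
ALLOWED_MODELS = {"gpt-4o-mini", "claude-3-haiku"}

def run_llm(model: str, prompt: str) -> str:
    # Allowlist check mirrors the select_model validation described above.
    if model not in ALLOWED_MODELS:
        raise ValueError(f"unrecognized model: {model}")
    # Arguments passed as a list with shell=False (the default): the prompt is
    # a single argv entry, so shell metacharacters in it are never interpreted.
    result = subprocess.run(
        ["llm", "-m", model, prompt],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout
```

Because no shell is involved, even a prompt like `"; rm -rf /"` is delivered to the llm binary as inert text rather than executed.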
Audit Metadata
Risk Level
SAFE
Analyzed
Feb 17, 2026, 06:12 PM