skills/seckatie/katies-ai-skills/llm — Gen Agent Trust Hub

Skill: llm

Audit result: Fail

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: HIGH
Tags: CREDENTIALS_UNSAFE, EXTERNAL_DOWNLOADS, REMOTE_CODE_EXECUTION, COMMAND_EXECUTION, PROMPT_INJECTION
Full Analysis
  • COMMAND_EXECUTION (HIGH): The skill includes an 'llm install' command, which acts as a wrapper for 'pip install'. This allows the installation of arbitrary Python packages into the environment, which can execute code during installation or runtime.
  • REMOTE_CODE_EXECUTION (HIGH): The '--functions' argument allows users to provide and execute arbitrary Python code or point to local Python files. Additionally, the tool supports 'tools' (function calling), which enables the LLM to trigger Python functions based on its output.
  • CREDENTIALS_UNSAFE (MEDIUM): The skill manages API keys for multiple providers (OpenAI, Anthropic, Gemini, etc.) and stores them in a local 'keys.json' file. While intended for convenience, this file holds plain-text secrets readable by any plugin or script with file-system access.
  • EXTERNAL_DOWNLOADS (MEDIUM): The tool can fetch content from arbitrary URLs via the '--fragment' option and can download model files or plugins from external sources during use.
  • COMMAND_EXECUTION (LOW): The documentation build script ('docs/conf.py') utilizes 'subprocess.Popen' with 'shell=True' to execute a git command. While the command is static, the use of 'shell=True' is a security anti-pattern.
  • INDIRECT_PROMPT_INJECTION (LOW): The skill processes untrusted data from URLs, local files, and attachments. Combined with the tool's capabilities (such as Python function execution), this creates the "lethal trifecta" risk explicitly warned about in the documentation.
      1. Ingestion points: '-f/--fragment' (URLs/files), '-a/--attachment', and stdin piping.
      2. Boundary markers: none enforced; the documentation warns users, but the tool does not strictly sanitize inputs.
      3. Capability inventory: execution of Python functions via '--functions', plugin hooks, and tool calling; local file reads; network requests.
      4. Sanitization: no automated sanitization of external content is mentioned; safety depends on the underlying LLM and user caution.
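To illustrate the REMOTE_CODE_EXECUTION finding: any Python file supplied via '--functions' is loaded as code, so its top-level statements run immediately, before the model calls any tool. The sketch below is a minimal stand-in (the file contents and the exec-based loader are illustrative assumptions, not llm's actual loading code):

```python
# Hypothetical contents of a functions.py passed to `llm --functions`.
# The loader below is a simplified stand-in for how such a file is loaded.
source = '''
print("top-level code executed at load time")  # runs before any tool call

def add(a: int, b: int) -> int:
    "A tool the model may choose to invoke."
    return a + b
'''

namespace = {}
exec(compile(source, "functions.py", "exec"), namespace)  # side effects fire here
```

The point is that trust is established at load time, not call time: merely pointing '--functions' at a file executes whatever sits outside the function bodies.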
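The CREDENTIALS_UNSAFE finding can be demonstrated in a few lines: because 'keys.json' is plain text, any co-resident code can read every provider secret. The path and schema below are assumptions for illustration (a temp directory and a fake key), not llm's real key store contents:

```python
import json
import pathlib
import tempfile

# Stand-in for llm's keys.json (location and schema are assumed for the demo).
keys_dir = pathlib.Path(tempfile.mkdtemp())
keys_file = keys_dir / "keys.json"
keys_file.write_text(json.dumps({"openai": "sk-example-not-real"}))

# Any plugin or script with file-system access can recover the secrets:
leaked = json.loads(keys_file.read_text())
print(leaked["openai"])  # plain text; no encryption or OS keychain in the way
```

No privilege escalation is required: read access to the user's config directory is sufficient.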
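The low-severity COMMAND_EXECUTION finding is the classic 'shell=True' anti-pattern. With a static command it is merely fragile, but the same call shape becomes an injection vector the moment any input is interpolated. A harmless demonstration (the payload here uses 'echo', not a real git command):

```python
import subprocess

payload = "README.md; echo INJECTED"

# shell=True hands the string to /bin/sh, so the ';' splits it into two commands.
out = subprocess.run(
    f"echo {payload}", shell=True, capture_output=True, text=True
).stdout

# Passing an argument list bypasses the shell: the payload stays one literal arg.
safe = subprocess.run(
    ["echo", payload], capture_output=True, text=True
).stdout
```

The list form is the standard fix; even for a static git command, it removes the shell from the chain entirely.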
Recommendations
  • The automated AI audit detected serious security threats; review the findings above before enabling this skill.
Audit Metadata
Risk Level: HIGH
Analyzed: Feb 17, 2026, 06:29 PM