llm-advisor

Pass

Audited by Gen Agent Trust Hub on Apr 1, 2026

Risk Level: SAFEDATA_EXFILTRATIONCOMMAND_EXECUTIONEXTERNAL_DOWNLOADSPROMPT_INJECTION
Full Analysis
  • [DATA_EXFILTRATION]: Instructions facilitate the transmission of local project data to external third-party services.\n
  • Specifically, it suggests piping sensitive files like source code (src/auth.ts), version control history (git diff), and error logs to external LLM providers via the llm command-line utility.\n- [COMMAND_EXECUTION]: The skill relies on shell command execution for installation and core functionality.\n
  • It uses system commands to install the necessary tools (pip install, brew install) and to execute the llm binary for querying models.\n- [EXTERNAL_DOWNLOADS]: Downloads software and plugins from public package repositories.\n
  • Fetches the llm package and its associated plugins (e.g., llm-gemini) from standard registries like PyPI and Homebrew.\n- [PROMPT_INJECTION]: Potential surface for indirect prompt injection via the processing of untrusted local data.\n
  • Ingestion points: Reads data directly from source files, logs, and git diffs (e.g., src/auth.ts, error.log).\n
  • Boundary markers: Prompts lack explicit delimiters to separate the data being analyzed from the instructions provided to the external model.\n
  • Capability inventory: The agent has the ability to execute shell commands and interact with external network services via the llm tool.\n
  • Sanitization: Content from the ingested files is passed directly to the external model without validation or escaping of potential embedded instructions.
Audit Metadata
Risk Level
SAFE
Analyzed
Apr 1, 2026, 10:44 PM