llm-advisor
Fail
Audited by Gen Agent Trust Hub on Feb 15, 2026
Risk Level: HIGHDATA_EXFILTRATIONPROMPT_INJECTIONEXTERNAL_DOWNLOADSCOMMAND_EXECUTION
Full Analysis
- [Data Exposure & Exfiltration] (HIGH): The skill documentation explicitly instructs the agent to read potentially sensitive files (e.g.,
src/auth.ts,error.log) and transmit their content to external LLM APIs using thellmCLI. This represents a significant risk of intellectual property exposure. - [Prompt Injection] (HIGH): The skill is highly vulnerable to Indirect Prompt Injection (Category 8). Ingestion point: Local source files and git diffs are piped into the
llmtool. Boundary markers: Absent. Capability inventory: The skill's output is intended to guide complex debugging and architectural decisions. Sanitization: None. - [External Downloads] (MEDIUM): The skill requires installing the
llmCLI and plugins from third-party sources (PyPI/Homebrew). These are not within the defined trusted organizations list. - [Command Execution] (MEDIUM): The skill utilizes shell piping to transmit local data to network-connected tools, which is the core mechanism for the identified exposure risks.
Recommendations
- AI detected serious security threats
Audit Metadata