consultant
Fail
Audited by Gen Agent Trust Hub on Mar 1, 2026
Risk Level: HIGHEXTERNAL_DOWNLOADSDATA_EXFILTRATIONPROMPT_INJECTION
Full Analysis
- [EXTERNAL_DOWNLOADS]: The skill documentation includes instructions to download and install the 'uv' package manager directly from its official domain (astral.sh) using a shell script. This is a standard installation method for a well-known developer tool.
- [DATA_EXFILTRATION]: The skill is designed to read local files and transmit their contents to external LLM providers via LiteLLM. While this is the intended primary purpose, users should be cautious when including sensitive files (like .env or private keys) in their analysis requests, as that data will be sent to the configured AI service.
- [PROMPT_INJECTION]: The skill exhibits an indirect prompt injection surface (Category 8) because it ingests untrusted content from local files and interpolates it into the system prompt.
- Ingestion points: Files are read in
file_handler.pyvia theprocess_filesmethod. - Boundary markers: The skill uses markdown headers and code blocks in
file_handler.py:build_prompt_with_referencesto separate file content from instructions, which provides basic structure but does not prevent adversarial instructions within the files from influencing the model. - Capability inventory: The skill communicates with external APIs using
litellm_client.pyand manages background processes for long-running consultations. - Sanitization: There is no evidence of content sanitization or specific instructions to the model to ignore potential commands embedded within the analyzed files.
Recommendations
- HIGH: Downloads and executes remote code from: https://astral.sh/uv/install.sh - DO NOT USE without thorough review
Audit Metadata