vllm-server
Fail
Audited by Snyk on Mar 9, 2026
Risk Level: HIGH
Full Analysis
HIGH W007: Insecure credential handling detected in skill instructions.
- Insecure credential handling detected (high risk: 1.00). The prompt includes explicit examples that pass API keys and bearer tokens directly in command-line arguments and HTTP headers (e.g., --api-key your-secret-key, -H "Authorization: Bearer your-secret-key"), requiring the agent to reproduce secret values verbatim in its output, where they can leak into shell history, process listings, and logs.
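A hedged remediation sketch for this finding: supply the key via the environment rather than hard-coding it in the skill text. The secret file path is an assumption; vLLM does honor a VLLM_API_KEY environment variable as an alternative to the --api-key flag.

```shell
# Assumed secret location; adjust to your secret manager or mount point.
export VLLM_API_KEY="$(cat /run/secrets/vllm_api_key)"

# Server side: vLLM reads VLLM_API_KEY from the environment, so the
# literal secret never appears on the command line or in the transcript.
vllm serve meta-llama/Llama-3.1-8B-Instruct  # model name is illustrative

# Client side: expand the variable at call time instead of pasting the key.
curl -H "Authorization: Bearer ${VLLM_API_KEY}" http://localhost:8000/v1/models
```

This keeps the agent from ever emitting the secret verbatim; it only emits the variable reference.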
MEDIUM W011: Third-party content exposure detected (indirect prompt injection risk).
- Third-party content exposure detected (high risk: 0.90). The SKILL.md explicitly instructs serving and pre-pulling models by name (e.g., "vllm serve meta-llama/...", "huggingface-cli download ") and uses HUGGING_FACE_HUB_TOKEN/model-cache. As a result, the agent will fetch and load models from the public Hugging Face Hub, which hosts untrusted user-uploaded content whose weights and configuration directly determine runtime behavior and outputs.
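A hedged mitigation sketch for this finding: pin the download to an immutable commit and serve from the local copy so nothing mutable is fetched at runtime. The model name, revision hash, and local directory below are placeholders, not values from the audited skill.

```shell
# Pin to a specific commit hash (placeholder shown) rather than a mutable
# branch name, so the fetched content cannot silently change.
huggingface-cli download meta-llama/Llama-3.1-8B-Instruct \
  --revision 0123456789abcdef0123456789abcdef01234567 \
  --local-dir ./model-cache/llama-3.1-8b

# Serve the verified local snapshot; no Hub access occurs at runtime.
vllm serve ./model-cache/llama-3.1-8b
```

Pinning narrows the exposure from "whatever the Hub serves today" to a reviewable, fixed artifact.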
MEDIUM W012: Unverifiable external dependency detected (runtime URL that controls agent).
- Potentially malicious external URL detected (high risk: 0.80). The skill pulls and runs the remote Docker image "vllm/vllm-openai:latest" at runtime (docker run / docker-compose), fetching and executing remote code as a required operational dependency. Because the :latest tag is mutable, the code executed can change between runs without review.
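A hedged mitigation sketch for this finding: resolve the tag once, record its content digest, and run by digest so the executed image is immutable. The digest shown is a placeholder.

```shell
# Resolve the current tag and record its digest.
docker pull vllm/vllm-openai:latest
docker inspect --format '{{index .RepoDigests 0}}' vllm/vllm-openai:latest
# => vllm/vllm-openai@sha256:... (record this value after review)

# Run by digest (placeholder shown); the image content cannot change
# between runs the way a mutable tag can.
docker run --gpus all -p 8000:8000 \
  vllm/vllm-openai@sha256:0000000000000000000000000000000000000000000000000000000000000000
```

Digest pinning converts an unverifiable runtime dependency into a fixed, auditable artifact.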
Audit Metadata