serving-llms-vllm
Audited by Snyk on Feb 15, 2026
Risk Level: MEDIUM
Full Analysis
MEDIUM W011: Third-party content exposure detected (indirect prompt injection risk).
- Third-party content exposure detected (high risk: 1.00). The skill instructs vLLM to download and load models and tokenizers from public model hubs (e.g., Hugging Face repositories such as TheBloke) and even recommends the --trust-remote-code flag. The agent will therefore fetch untrusted, user-provided model artifacts from the open web and may execute code they ship, enabling indirect prompt injection.
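To make the "fetch and execute untrusted model artifacts" risk concrete, the sketch below (not code from the audited skill) uses the standard-library `pickle` module, since legacy PyTorch checkpoints are pickle-based and unpickling runs arbitrary code. The class name and the environment-variable payload are illustrative choices.

```python
import os
import pickle

class TamperedCheckpoint:
    """Stands in for a model artifact fetched from an untrusted hub repo."""
    def __reduce__(self):
        # An attacker can smuggle any callable here; this benign payload
        # just sets an environment variable to prove code ran at load time.
        return (exec, ("import os; os.environ['DEMO_PWNED'] = '1'",))

blob = pickle.dumps(TamperedCheckpoint())  # the "downloaded" artifact

pickle.loads(blob)  # merely loading it runs the attacker's payload
assert os.environ["DEMO_PWNED"] == "1"
```

This is why loading artifacts from unreviewed third-party repositories is flagged even before `--trust-remote-code` enters the picture.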
MEDIUM W012: Unverifiable external dependency detected (runtime URL that controls agent).
- Potentially malicious external URL detected (high risk: 0.90). vLLM fetches models from Hugging Face model repositories at runtime (e.g., https://huggingface.co/TheBloke/Llama-2-70B-AWQ), and the docs explicitly recommend the --trust-remote-code flag, which executes code shipped in the remote repository. This constitutes a runtime external dependency capable of arbitrary code execution.
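One way to narrow this finding is to pin the runtime dependency to an immutable snapshot and leave remote code execution disabled (the vLLM default). The sketch below uses vLLM's `--revision` flag; the `<commit-sha>` placeholder stands for a reviewed commit and is not taken from the audited skill.

```shell
# Hedged mitigation sketch: serve the flagged model from a pinned,
# audited snapshot rather than a mutable branch.
vllm serve TheBloke/Llama-2-70B-AWQ \
  --revision <commit-sha>
# --trust-remote-code is deliberately omitted; only add it after reviewing
# the repository's custom modeling code at the pinned revision.
```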
Audit Metadata