serving-llms-vllm
Status: Warn
Audited by Snyk on Mar 28, 2026
Risk Level: MEDIUM
Full Analysis
MEDIUM W011: Third-party content exposure detected (indirect prompt injection risk).
- Third-party content exposure (risk score: 0.90). The skill instructs users to fetch and serve third-party models from public hubs (e.g., "Search HuggingFace", examples such as TheBloke/... and "Use pre-quantized models from HuggingFace"), and it documents using --trust-remote-code. Its runtime workflow therefore ingests untrusted content from the open web (HuggingFace).
MEDIUM W012: Unverifiable external dependency detected (runtime URL that controls agent).
- Potentially malicious external URL detected (risk score: 0.90). The skill runs vllm serve against remote model repos (e.g., TheBloke/Llama-2-70B-AWQ => https://huggingface.co/TheBloke/Llama-2-70B-AWQ), which are fetched at runtime and, per the docs' --trust-remote-code guidance, can include and execute remote repository code that directly controls model behavior.
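The pattern flagged by both findings can be sketched as a pre-serve check on the planned invocation. The allowlist contents, function name, and warning strings below are illustrative assumptions, not part of the audit tooling or of vLLM itself:

```python
# Hypothetical pre-serve check mirroring findings W011/W012: reject model
# sources that are not on a local allowlist, and flag --trust-remote-code,
# which lets Python files bundled in a model repository execute at load time.
ALLOWED_ORGS = {"meta-llama", "mistralai"}  # example allowlist, not from the audit

def audit_serve_args(model: str, extra_flags: list[str]) -> list[str]:
    """Return warnings for a planned `vllm serve <model> <flags>` invocation."""
    warnings = []
    org = model.split("/")[0] if "/" in model else ""
    if org not in ALLOWED_ORGS:
        warnings.append(f"W011: third-party model source '{model}' not on allowlist")
    if "--trust-remote-code" in extra_flags:
        warnings.append("W012: --trust-remote-code executes repository code at load time")
    return warnings

# The exact invocation pattern cited in the analysis trips both checks:
print(audit_serve_args("TheBloke/Llama-2-70B-AWQ", ["--trust-remote-code"]))
```

A check like this would run before the skill shells out to vllm serve; it does not remove the risk of a compromised allowed repo, only the two specific exposures the audit names.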
Issues (2)
W011
MEDIUM: Third-party content exposure detected (indirect prompt injection risk).
W012
MEDIUM: Unverifiable external dependency detected (runtime URL that controls agent).
Audit Metadata