vllm-ascend

Fail

Audited by Gen Agent Trust Hub on Mar 1, 2026

Risk Level: HIGHPROMPT_INJECTIONREMOTE_CODE_EXECUTION
Full Analysis
  • [REMOTE_CODE_EXECUTION]: The automated scan flagged a command in scripts/deploy_service.sh where a health check response is piped to python3. The actual command is curl -s "http://localhost:$PORT/health" | python3 -m json.tool, which uses the standard Python JSON module for formatting and is not an execution of the response as code.
  • [REMOTE_CODE_EXECUTION]: The skill scripts utilize the --trust-remote-code flag (e.g., in scripts/benchmark.py and scripts/deploy_service.sh). This is a legitimate vLLM feature for loading certain models but it allows execution of Python code bundled with the model weights.
  • [INDIRECT_PROMPT_INJECTION]: The skill features a surface for indirect prompt injection as it processes untrusted prompt data.
  • Ingestion points: Prompts are processed through the llm.generate method and the OpenAI-compatible server.
  • Boundary markers: The scripts do not implement specific delimiters or instructions to ignore embedded commands.
  • Capability inventory: The skill manages inference subprocesses and network service ports.
  • Sanitization: There is no specific input validation or sanitization implemented in the provided skill scripts.
Recommendations
  • HIGH: Downloads and executes remote code from: http://localhost:$PORT/health - DO NOT USE without thorough review
Audit Metadata
Risk Level
HIGH
Analyzed
Mar 1, 2026, 11:03 AM