vllm-studio-backend
Fail
Audited by Gen Agent Trust Hub on Feb 16, 2026
Risk Level: HIGHPROMPT_INJECTIONCOMMAND_EXECUTION
Full Analysis
- [Indirect Prompt Injection] (HIGH): The backend architecture described in the skill is designed to process untrusted chat messages while possessing high-privilege tool execution capabilities.
- Ingestion points: Untrusted user input enters the system via the
chat_messagesobject in the/v1/chat/completionsand/chats/:id/turnendpoints (referenced inreferences/backend-architecture.md). - Boundary markers: The documentation does not describe the use of delimiters or boundary markers to isolate untrusted data from instructions within the Pi-mono agent loop.
- Capability inventory: The system executes tools server-side, including AgentFS (file system access) and MCP (Model Context Protocol), which provide a significant impact if subverted (referenced in
SKILL.mdandreferences/backend-architecture.md). - Sanitization: There is no mention of sanitization, validation, or filtering of the input data before it is interpolated into the agent's reasoning or tool-calling logic.
- [Command Execution] (LOW): The skill contains commands for local service management and health monitoring.
- Evidence:
references/backend-commands.mdprovides shell commands for starting the controller (./start.sh) and performing network tests withcurl. These are standard development operations but represent local execution capabilities.
Recommendations
- AI detected serious security threats
Audit Metadata