parlor-on-device-ai

Audit result: Fail

Audited by Gen Agent Trust Hub on Apr 7, 2026

Risk Level: HIGH
Flags: EXTERNAL_DOWNLOADS, REMOTE_CODE_EXECUTION, COMMAND_EXECUTION, PROMPT_INJECTION
Full Analysis
  • [REMOTE_CODE_EXECUTION]: Fetches and executes the uv package manager installation script from its official domain (astral.sh) using a shell pipe command (curl -LsSf https://astral.sh/uv/install.sh | sh).
  • [EXTERNAL_DOWNLOADS]: Clones a code repository from an unverified GitHub user (fikrikarim/parlor) and downloads AI model weights (~2.6 GB) from Hugging Face (google/gemma-4-E2B-it) during initialization.
  • [COMMAND_EXECUTION]: Instructs the agent to synchronize dependencies and execute server-side scripts (uv sync, uv run server.py, uv run benchmarks/bench.py) from the downloaded repository.
  • [PROMPT_INJECTION]: The skill is susceptible to indirect prompt injection.
    1. Ingestion points: Untrusted audio PCM and JPEG frames enter the system via the WebSocket /ws endpoint (SKILL.md).
    2. Boundary markers: Absent; the configuration shows no delimiters and no instructions to ignore embedded instructions.
    3. Capability inventory: The system performs multimodal AI inference (run_gemma_inference) and generates an audio response via TTS (run_tts) from the resulting text.
    4. Sanitization: Absent; incoming binary media data is passed directly to the inference engine without validation.
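Given the absent sanitization noted in the findings above, a minimal pre-inference gate could at least reject payloads that are not plausibly JPEG or that exceed a size cap before they reach the model. This is an illustrative sketch only: the function names, the size limit, and the idea of calling it from the /ws handler are assumptions, not part of the audited skill.

```python
# Illustrative sketch (not from the audited skill): validate untrusted
# binary media before it reaches the inference engine.

MAX_FRAME_BYTES = 5 * 1024 * 1024  # per-frame size cap (assumed value)

def is_plausible_jpeg(data: bytes) -> bool:
    """Check JPEG magic bytes: SOI marker 0xFFD8 at start, EOI 0xFFD9 at end."""
    return (
        len(data) >= 4
        and data[:2] == b"\xff\xd8"
        and data[-2:] == b"\xff\xd9"
    )

def validate_frame(data: bytes) -> bool:
    """Gate to apply to each incoming WebSocket frame before inference."""
    if not data or len(data) > MAX_FRAME_BYTES:
        return False
    return is_plausible_jpeg(data)
```

Note that magic-byte and size checks do not stop textual instructions embedded inside valid media; they complement, rather than replace, boundary markers in the prompt itself.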
Recommendations
  • HIGH: Downloads and executes remote code from https://astral.sh/uv/install.sh; DO NOT USE without thorough review.
  • Automated analysis detected serious security threats; review the findings above before installation.
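One way to act on the "review before use" recommendation is to replace the flagged curl | sh pipe with a download-then-verify step: fetch the installer once, compare its SHA-256 against a digest you recorded after manually reviewing the script, and only then execute it. The sketch below assumes this workflow; the pinned hash is a placeholder, not the real digest of install.sh.

```python
import hashlib
import subprocess

# Placeholder digest: record the real value yourself after manually
# reviewing the downloaded install script.
PINNED_SHA256 = "0" * 64

def verify_script(content: bytes, expected_sha256: str) -> bool:
    """Return True only if the script bytes match the reviewed digest."""
    return hashlib.sha256(content).hexdigest() == expected_sha256

def run_if_verified(path: str) -> None:
    """Execute a downloaded installer only after the digest check passes."""
    with open(path, "rb") as f:
        content = f.read()
    if not verify_script(content, PINNED_SHA256):
        raise RuntimeError("install script digest mismatch; refusing to run")
    subprocess.run(["sh", path], check=True)
```

This turns a blind pipe-to-shell into a reproducible check: the script can only change out from under you if the recorded digest is deliberately updated.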
Audit Metadata
Risk Level: HIGH
Analyzed: Apr 7, 2026, 03:12 AM