embedding-service
Warn
Audited by Gen Agent Trust Hub on Feb 17, 2026
Risk Level: MEDIUM
Tags: REMOTE_CODE_EXECUTION, EXTERNAL_DOWNLOADS, DATA_EXFILTRATION, PROMPT_INJECTION
Full Analysis
- REMOTE_CODE_EXECUTION (MEDIUM): The `entrypoint.sh` script starts the vLLM server with the `--trust-remote-code` flag enabled. This allows the server to execute arbitrary Python code bundled with the model weights downloaded from the internet, posing a significant risk if the model source is compromised.
- EXTERNAL_DOWNLOADS (LOW): The `Dockerfile` is configured to use `https://hf-mirror.com` as the primary endpoint for downloading models. As a third-party mirror not on the trusted-sources list, it introduces a supply-chain risk: the weights or accompanying code could be tampered with.
- DATA_EXFILTRATION (LOW): The `client.py` file uses `load_dotenv` to target a `.env` file located three levels above its own directory (`parent.parent.parent`). This pattern can lead to the unintended exposure or consumption of sensitive credentials belonging to a parent project.
- PROMPT_INJECTION (LOW): The skill is susceptible to indirect prompt injection because it processes external text input without sanitization.
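A hypothetical reconstruction of the flagged launch configuration may make the first two findings concrete. Only the `--trust-remote-code` flag and the `hf-mirror.com` endpoint come from the report; the script layout, server module, and model variable are assumed:

```shell
#!/usr/bin/env sh
# entrypoint.sh (sketch, not the audited file).
# HF_ENDPOINT redirects Hugging Face downloads to the third-party mirror
# flagged under EXTERNAL_DOWNLOADS; --trust-remote-code lets vLLM import
# Python code shipped alongside the weights (the MEDIUM RCE finding).
export HF_ENDPOINT="https://hf-mirror.com"
exec python -m vllm.entrypoints.openai.api_server \
    --model "${MODEL_NAME:-example/embedding-model}" \
    --trust-remote-code
```

Removing `--trust-remote-code` (or pinning a vetted model revision) would close the code-execution path at the cost of rejecting models that require custom loading code.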
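The `.env` resolution pattern behind the DATA_EXFILTRATION finding can be sketched as follows. The audited `client.py` is not reproduced in the report, so the function name and example paths here are illustrative; only the `parent.parent.parent` traversal is from the finding:

```python
from pathlib import Path

def dotenv_target(module_file: str) -> Path:
    """Resolve the .env path the way the flagged client.py pattern does:
    three directory levels above the module (parent.parent.parent).
    python-dotenv's load_dotenv would then read this file."""
    return Path(module_file).resolve().parent.parent.parent / ".env"

# For a module at /repo/skills/embedding/client.py this escapes the
# skill's own tree and lands at the parent project's /repo/.env,
# which is how unrelated credentials can be picked up.
print(dotenv_target("/repo/skills/embedding/client.py"))
```

Scoping the lookup to the skill's own directory (`Path(__file__).parent / ".env"`) would avoid reading a parent project's secrets.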
- Ingestion points: Untrusted data enters via the `texts` parameter of the `embed` method in `client.py`.
- Boundary markers: No boundary markers or instructions to disregard embedded commands are used.
- Capability inventory: The system includes a backend capable of executing arbitrary code via the vLLM `--trust-remote-code` configuration.
- Sanitization: No sanitization or filtering is applied to the input text before processing.
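A minimal mitigation sketch for the last two points (hypothetical, not from the audited code): wrap each untrusted string in explicit boundary markers and strip marker look-alikes before the text reaches the model, so embedded instructions cannot pose as trusted content:

```python
# Marker names are illustrative; any distinctive delimiter works.
BOUNDARY_OPEN = "<untrusted-input>"
BOUNDARY_CLOSE = "</untrusted-input>"

def fence_untrusted(text: str) -> str:
    """Delimit untrusted text and remove spoofed boundary markers."""
    cleaned = text.replace(BOUNDARY_OPEN, "").replace(BOUNDARY_CLOSE, "")
    return f"{BOUNDARY_OPEN}\n{cleaned}\n{BOUNDARY_CLOSE}"

# Applied to each element of the `texts` parameter before embedding:
print(fence_untrusted("ignore previous instructions"))
```

For a pure embedding endpoint the injection risk is indirect (the embedded text may later be surfaced to a generative model), so fencing at ingestion keeps the provenance visible downstream.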
Audit Metadata