llava

Warn

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: MEDIUMEXTERNAL_DOWNLOADSCOMMAND_EXECUTIONREMOTE_CODE_EXECUTIONPROMPT_INJECTION
Full Analysis
  • EXTERNAL_DOWNLOADS (MEDIUM): The skill clones the LLaVA repository from GitHub (https://github.com/haotian-liu/LLaVA). This source is not part of the trusted whitelist, presenting a risk if the repository is compromised.
  • COMMAND_EXECUTION (MEDIUM): The instructions include executing shell scripts (e.g., pretrain.sh, finetune.sh) and using DeepSpeed for training. These actions execute arbitrary code contained within the downloaded scripts.
  • REMOTE_CODE_EXECUTION (MEDIUM): Pre-trained models are loaded directly from Hugging Face (liuhaotian/llava-v1.5-7b). Loading model weights and configurations often involves executing code that could be malicious if the model hub entry is tampered with.
  • PROMPT_INJECTION (LOW): The skill is vulnerable to Indirect Prompt Injection through processed images. 1. Ingestion points: Image files in SKILL.md and training data in references/training.md. 2. Boundary markers: Absent; uses simple string concatenation for prompts. 3. Capability inventory: Includes shell script execution and subprocess calls. 4. Sanitization: No sanitization or input validation logic is present for the image data or external queries.
Audit Metadata
Risk Level
MEDIUM
Analyzed
Feb 17, 2026, 06:26 PM