llava

Fail

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: HIGHEXTERNAL_DOWNLOADSREMOTE_CODE_EXECUTIONCOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
  • EXTERNAL_DOWNLOADS (HIGH): The skill directs the user to clone a repository from 'https://github.com/haotian-liu/LLaVA'. This repository and its author are not included in the 'Trusted GitHub Organizations' list, making the source unverifiable and high-risk.
  • REMOTE_CODE_EXECUTION (HIGH): The skill implements a 'download then execute' pattern. It clones an untrusted external repository and executes its contents via 'pip install -e .' followed by script execution ('python -m llava.serve.cli'). It also loads pre-trained model weights from Hugging Face ('liuhaotian/llava-v1.5-7b'), which creates a risk of code execution through unsafe 'pickle' deserialization.
  • COMMAND_EXECUTION (MEDIUM): The skill extensively uses shell commands for installation and operation, including 'deepspeed' and 'bash' scripts for training. These provide a high-privilege attack surface if the downloaded repository is compromised.
  • PROMPT_INJECTION (LOW): The skill is vulnerable to indirect prompt injection via image and text inputs. Evidence:
  • Ingestion points: 'SKILL.md' ('Image.open', 'query' text).
  • Boundary markers: Absent. There are no delimiters or 'ignore' instructions to prevent adversarial commands embedded in images from being interpreted by the model.
  • Capability inventory: 'SKILL.md' (subprocess execution via CLI), 'training.md' (bash/deepspeed script execution).
  • Sanitization: Absent. Untrusted external image data is processed without validation or filtering.
Recommendations
  • AI detected serious security threats
Audit Metadata
Risk Level
HIGH
Analyzed
Feb 17, 2026, 04:58 PM