llama-cpp

Pass

Audited by Gen Agent Trust Hub on Mar 2, 2026

Risk Level: SAFE
Categories reviewed: EXTERNAL_DOWNLOADS, REMOTE_CODE_EXECUTION, COMMAND_EXECUTION, PROMPT_INJECTION
Full Analysis
  • [EXTERNAL_DOWNLOADS]: The script scripts/convert_lora_to_gguf.py clones the official llama.cpp repository from GitHub (https://github.com/ggerganov/llama.cpp.git) to obtain necessary conversion scripts. This is documented as an intended feature for handling HuggingFace LoRA adapters and uses a well-known, trusted source.
  • [REMOTE_CODE_EXECUTION]: The skill performs an automated installation of the gguf Python package from the cloned llama.cpp source and executes the downloaded convert_hf_to_gguf.py script. These actions are performed to support model conversion pipelines and originate from a trusted repository.
  • [COMMAND_EXECUTION]: Multiple scripts execute local binaries such as llama-cli, llama-server, llama-quantize, and ollama using subprocess calls. These commands are used for their primary intended purposes: serving models, running inference, and managing model files.
  • [PROMPT_INJECTION]: The skill exposes an indirect prompt injection surface through scripts such as scripts/llama_lora.sh and scripts/llama_bench.sh, which accept user-provided prompts as command-line arguments and pass them to the model. This is standard behavior for LLM interface tools.
  • Ingestion points: Command-line arguments in scripts/llama_lora.sh, scripts/llama_bench.sh, and scripts/llama_serve.sh.
  • Boundary markers: None present.
  • Capability inventory: Network access (git clone of llama.cpp), Python package installation (gguf), and execution of local inference binaries (llama-cli, llama-server, llama-quantize, ollama).
  • Sanitization: None present.
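The download-install-convert flow flagged in the first two findings can be sketched as follows. This is an illustrative reconstruction, not the skill's exact code: the command shapes, the `--depth 1` clone, the `gguf-py` install path, and the `--outfile` flag are assumptions about how such a pipeline is typically wired, and building the commands as argument lists (never `shell=True`) is the hedge that keeps user-supplied paths from becoming shell injection.

```python
import subprocess

# Source repository named in the audit findings above.
LLAMA_CPP_REPO = "https://github.com/ggerganov/llama.cpp.git"

def build_convert_pipeline(workdir: str, model_dir: str, outfile: str) -> list[list[str]]:
    """Return the command sequence a conversion flow like the one audited
    would run. Shapes are illustrative assumptions, not the skill's code."""
    repo = f"{workdir}/llama.cpp"
    return [
        # EXTERNAL_DOWNLOADS: clone the upstream repo for its scripts.
        ["git", "clone", "--depth", "1", LLAMA_CPP_REPO, repo],
        # REMOTE_CODE_EXECUTION: install the gguf package from the clone.
        ["pip", "install", f"{repo}/gguf-py"],
        # Run the downloaded conversion script against a local model dir.
        ["python", f"{repo}/convert_hf_to_gguf.py", model_dir, "--outfile", outfile],
    ]

def run_pipeline(cmds: list[list[str]]) -> None:
    for cmd in cmds:
        # List-form argv: arguments are never re-parsed by a shell.
        subprocess.run(cmd, check=True)
```

Because each command is a fixed argv list, a hostile `model_dir` or `outfile` value is passed through as a single argument rather than interpreted by a shell, which is the relevant mitigation for the COMMAND_EXECUTION surface noted above.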
Audit Metadata
  • Risk Level: SAFE
  • Analyzed: Mar 2, 2026, 09:47 PM