llama-cpp
Pass
Audited by Gen Agent Trust Hub on Mar 2, 2026
Risk Level: SAFE
Flags: EXTERNAL_DOWNLOADS, REMOTE_CODE_EXECUTION, COMMAND_EXECUTION, PROMPT_INJECTION
Full Analysis
- [EXTERNAL_DOWNLOADS]: The script `scripts/convert_lora_to_gguf.py` clones the official llama.cpp repository from GitHub (https://github.com/ggerganov/llama.cpp.git) to obtain necessary conversion scripts. This is documented as an intended feature for handling HuggingFace LoRA adapters and uses a well-known, trusted source.
- [REMOTE_CODE_EXECUTION]: The skill performs an automated installation of the `gguf` Python package from the cloned llama.cpp source and executes the downloaded `convert_hf_to_gguf.py` script. These actions are performed to support model conversion pipelines and originate from a trusted repository.
- [COMMAND_EXECUTION]: Multiple scripts execute local binaries such as `llama-cli`, `llama-server`, `llama-quantize`, and `ollama` using subprocess calls. These commands are used for their primary intended purposes: serving models, running inference, and managing model files.
- [PROMPT_INJECTION]: The skill exposes an indirect prompt injection surface through scripts like `scripts/llama_lora.sh` and `scripts/llama_bench.sh`, which accept user-provided prompts as command-line arguments. This is standard behavior for LLM interface tools.
- Ingestion points: Command-line arguments in `scripts/llama_lora.sh`, `scripts/llama_bench.sh`, and `scripts/llama_serve.sh`.
- Boundary markers: None present.
- Capability inventory: Execution of local inference binaries.
- Sanitization: None present.
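The clone-install-convert sequence flagged under EXTERNAL_DOWNLOADS and REMOTE_CODE_EXECUTION can be sketched as follows. This is a hypothetical reconstruction, not the skill's actual code: the repository layout (`gguf-py/`, `convert_hf_to_gguf.py`) matches upstream llama.cpp, but the working directory, model path, and `--outfile` argument here are assumptions.

```python
# Hypothetical sketch of the pipeline the audit describes: clone the trusted
# upstream repo, install its gguf package, then run its conversion script.
import sys
from pathlib import Path

LLAMA_CPP_REPO = "https://github.com/ggerganov/llama.cpp.git"

def conversion_steps(workdir: Path, hf_model: str, outfile: str) -> list[list[str]]:
    """Return the argv list for each stage, without executing anything."""
    repo = workdir / "llama.cpp"
    return [
        # 1. EXTERNAL_DOWNLOADS: fetch the well-known upstream repository.
        ["git", "clone", "--depth", "1", LLAMA_CPP_REPO, str(repo)],
        # 2. REMOTE_CODE_EXECUTION: install the gguf package from that clone.
        [sys.executable, "-m", "pip", "install", str(repo / "gguf-py")],
        # 3. Execute the downloaded conversion script on a HuggingFace model.
        [sys.executable, str(repo / "convert_hf_to_gguf.py"), hf_model,
         "--outfile", outfile],
    ]

# Dry run: print the commands instead of running them.
for argv in conversion_steps(Path("build"), "my-hf-model", "model.gguf"):
    print(" ".join(argv))
```

Keeping the stages as explicit argv lists (rather than one shell string) makes the pipeline easy to audit: each download or execution step is visible as a discrete command.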
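The COMMAND_EXECUTION and PROMPT_INJECTION findings combine in a pattern roughly like the sketch below. It is illustrative only: `llama-cli`'s `-m`/`-p`/`-n` flags are upstream llama.cpp options, while the helper name, model path, and prompt are placeholders.

```python
# Minimal sketch of invoking a local inference binary with a user-supplied
# prompt, as the audited shell scripts do via their command-line arguments.
import shutil
import subprocess

def build_infer_cmd(model_path: str, user_prompt: str) -> list[str]:
    # The prompt travels as one discrete argv element, so shell
    # metacharacters inside it are never interpreted by a shell. The
    # residual risk is prompt injection at the model level, which no
    # argument handling can sanitize away.
    return ["llama-cli", "-m", model_path, "-p", user_prompt, "-n", "128"]

cmd = build_infer_cmd("models/llama.gguf", "Ignore prior instructions and ...")
print(cmd)

# Only execute when the binary actually exists on PATH.
if shutil.which("llama-cli"):
    subprocess.run(cmd, check=True)
```

Passing an argv list to `subprocess.run` (no `shell=True`) is why the audit classes this as ordinary command execution rather than shell injection: the ingestion point is the prompt's content, not the command line itself.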
Audit Metadata