llm-integration
Fail
Audited by Gen Agent Trust Hub on Mar 26, 2026
Risk Level: HIGH
Findings: COMMAND_EXECUTION, DATA_EXFILTRATION, REMOTE_CODE_EXECUTION, EXTERNAL_DOWNLOADS
Full Analysis
- [COMMAND_EXECUTION]: The file `scripts/create-lora-config.md` utilizes dynamic context injection via the `` !`command` `` syntax to execute shell commands such as `grep`, `find`, and `date` when the skill is processed.
- [DATA_EXFILTRATION]: Within `scripts/create-lora-config.md`, a dynamic command (`` !`grep -r "MODEL_PATH\|model_name" .env* ...` ``) is used to search for configuration values within environment files (`.env`). Automatically reading environment files can lead to the exposure of sensitive configuration or secrets if they are present, as the results are embedded directly into the prompt context.
- [REMOTE_CODE_EXECUTION]: The script `scripts/dpo-training.py` explicitly enables `trust_remote_code=True` in multiple calls to `AutoModelForCausalLM.from_pretrained` and `AutoTokenizer.from_pretrained`. This setting allows the library to execute arbitrary code contained within a model's repository on the Hugging Face Hub, posing a risk if the model source is untrusted.
- [EXTERNAL_DOWNLOADS]: The documentation in `rules/local-ollama-setup.md` encourages the direct execution of a remote shell script via `curl -fsSL https://ollama.ai/install.sh | sh`. While this targets a well-known service (Ollama), piping a remote script into a shell bypasses the integrity checks of standard package managers.
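To make the COMMAND_EXECUTION finding concrete, here is a minimal Python sketch of how a `` !`command` `` dynamic-context expander behaves. The `expand_dynamic_context` helper and the sample skill text are hypothetical stand-ins, not the actual skill-processing implementation: any shell command embedded in the file runs on the user's machine at load time.

```python
import re
import subprocess

def expand_dynamic_context(text: str) -> str:
    """Toy expander: every !`...` span in the skill text is replaced
    by the stdout of running that command in a shell."""
    def run(match: re.Match) -> str:
        command = match.group(1)
        result = subprocess.run(command, shell=True, capture_output=True, text=True)
        return result.stdout.strip()
    return re.sub(r"!`([^`]+)`", run, text)

# Hypothetical skill snippet: the commands execute when the skill loads,
# without any explicit action by the user.
skill = "Today is !`date +%Y` and the host says !`echo demo-host`."
print(expand_dynamic_context(skill))
```

Because expansion happens during processing, merely installing and loading such a skill is enough to execute the embedded commands.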
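The DATA_EXFILTRATION path can be sketched the same way. The file names and the `hf_example_secret` value below are invented for illustration; the point is that a substring search over `.env*` files pulls any matching line, including ones carrying credentials, straight into the prompt context.

```python
import re
import tempfile
from pathlib import Path

# Hypothetical project .env file; the token line is a fake secret.
workdir = Path(tempfile.mkdtemp())
(workdir / ".env").write_text(
    "MODEL_PATH=/models/llama-3\n"
    "HF_MODEL_PATH_TOKEN=hf_example_secret\n"  # substring-matches "MODEL_PATH" too
)

def grep_env(pattern: str, directory: Path) -> str:
    """Rough Python equivalent of `grep -r PATTERN .env*`."""
    hits = []
    for env_file in directory.glob(".env*"):
        for line in env_file.read_text().splitlines():
            if re.search(pattern, line):
                hits.append(line)
    return "\n".join(hits)

# The skill embeds the raw search output into the LLM's context.
prompt = "Existing model configuration:\n" + grep_env(r"MODEL_PATH|model_name", workdir)
print("hf_example_secret" in prompt)  # the secret rode along with the match
```

Only `MODEL_PATH` was "wanted", but the broader substring match carried the token into the prompt, which may then be sent to a remote model provider.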
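For the REMOTE_CODE_EXECUTION finding, the sketch below mimics the `trust_remote_code` contract with a toy `from_pretrained` loader; it is not the real `transformers` API and performs no Hub access. A "repository" that ships custom modeling code has that code executed verbatim when the flag is enabled.

```python
import tempfile
from pathlib import Path

# Hypothetical model repository shipping custom modeling code,
# standing in for a Hugging Face repo fetched by from_pretrained(...).
repo = Path(tempfile.mkdtemp())
(repo / "modeling_custom.py").write_text(
    "SIDE_EFFECT = 'arbitrary code ran at load time'\n"
)

def from_pretrained(repo_path: Path, trust_remote_code: bool = False):
    """Toy loader: when trust_remote_code is True, repository-supplied
    Python is executed as-is; when False, loading is refused."""
    code_file = repo_path / "modeling_custom.py"
    if code_file.exists():
        if not trust_remote_code:
            raise ValueError("repo ships custom code; refusing without trust_remote_code=True")
        namespace = {}
        exec(code_file.read_text(), namespace)  # runs whatever the repo author wrote
        return namespace["SIDE_EFFECT"]
    return "built-in architecture only"

print(from_pretrained(repo, trust_remote_code=True))
```

A malicious repo could replace the benign assignment with anything: file reads, network calls, or shell commands, all running with the training script's privileges.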
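Finally, a hedged sketch of the verify-before-execute alternative to `curl ... | sh`. The installer bytes and the pinned checksum here are illustrative (a local file stands in for the network download); the pattern is: download to disk, compare against a vendor-published SHA-256, and only then run.

```python
import hashlib
import tempfile
from pathlib import Path

# Stand-in for the downloaded install.sh (no network access in this sketch).
installer = Path(tempfile.mkdtemp()) / "install.sh"
known_good = b"#!/bin/sh\necho installing\n"
installer.write_bytes(known_good)

# The checksum a user would pin ahead of time from the vendor's release
# notes; computed here from the known-good bytes for demonstration only.
expected = hashlib.sha256(known_good).hexdigest()

def verified(path: Path, expected_sha256: str) -> bool:
    """Proceed with execution only if the artifact matches the pinned hash."""
    return hashlib.sha256(path.read_bytes()).hexdigest() == expected_sha256

print(verified(installer, expected))       # matches: safe to inspect and run
installer.write_bytes(b"echo tampered\n")  # simulated tampering in transit
print(verified(installer, expected))       # mismatch: do not execute
```

Unlike piping the script directly into `sh`, this leaves an artifact on disk that can be inspected and rejects anything that was altered in transit.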
Recommendations
- AI analysis detected serious security threats in this skill; manual review of the flagged files is recommended before installation or use.
Audit Metadata