llm-integration

Fail

Audited by Gen Agent Trust Hub on Mar 26, 2026

Risk Level: HIGH
Findings: COMMAND_EXECUTION, DATA_EXFILTRATION, REMOTE_CODE_EXECUTION, EXTERNAL_DOWNLOADS
Full Analysis
  • [COMMAND_EXECUTION]: The file scripts/create-lora-config.md uses dynamic context injection (the !`command` syntax) to execute shell commands such as grep, find, and date when the skill is processed.
  • [DATA_EXFILTRATION]: Within scripts/create-lora-config.md, a dynamic command (!grep -r "MODEL_PATH\|model_name" .env* ...) searches environment files (.env*) for configuration values. Automatically reading environment files can expose sensitive configuration or secrets, because the matched lines are embedded directly into the prompt context.
  • [REMOTE_CODE_EXECUTION]: The script scripts/dpo-training.py explicitly enables trust_remote_code=True in multiple calls to AutoModelForCausalLM.from_pretrained and AutoTokenizer.from_pretrained. This setting allows the library to execute arbitrary code contained within a model's repository on the Hugging Face Hub, posing a risk if the model source is untrusted.
  • [EXTERNAL_DOWNLOADS]: The documentation in rules/local-ollama-setup.md encourages the direct execution of a remote shell script via curl -fsSL https://ollama.ai/install.sh | sh. While targeting a well-known service (Ollama), this method of installation bypasses standard security checks of package managers.
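As one possible mitigation for the DATA_EXFILTRATION finding, values from .env files could be redacted before any output is embedded into prompt context. A minimal sketch; the helper name and the list of sensitive key suffixes are illustrative, not part of the audited skill:

```python
import re

# Illustrative list of key suffixes whose values should never reach a prompt.
_SENSITIVE = re.compile(
    r"^(?P<key>[A-Za-z_][A-Za-z0-9_]*?(KEY|TOKEN|SECRET|PASSWORD))=(?P<val>.*)$",
    re.MULTILINE,
)

def redact_env_text(text: str) -> str:
    """Replace values of sensitive-looking .env entries with a placeholder."""
    return _SENSITIVE.sub(lambda m: f"{m.group('key')}=[REDACTED]", text)
```

Non-sensitive entries such as MODEL_PATH pass through unchanged, so the skill's configuration lookup still works.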
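For the REMOTE_CODE_EXECUTION finding, one common hardening pattern is to build the from_pretrained keyword arguments through a guard that forbids trust_remote_code and requires a pinned revision. A sketch under those assumptions; the helper name and policy are hypothetical, not taken from scripts/dpo-training.py:

```python
def hardened_pretrained_kwargs(model_id: str, revision: str, **kwargs) -> dict:
    """Build from_pretrained kwargs that disallow remote code execution.

    Forces trust_remote_code=False and requires an explicit pinned revision
    (tag or commit hash) so the loaded code cannot silently change upstream.
    """
    if kwargs.get("trust_remote_code"):
        raise ValueError(f"trust_remote_code is not allowed for {model_id}")
    if not revision:
        raise ValueError(f"a pinned revision is required for {model_id}")
    kwargs.update(trust_remote_code=False, revision=revision)
    return kwargs
```

A caller would then write something like AutoModelForCausalLM.from_pretrained(model_id, **hardened_pretrained_kwargs(model_id, revision="<commit-hash>")), so any attempt to re-enable remote code fails loudly at load time.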
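For the EXTERNAL_DOWNLOADS finding, a safer alternative to piping curl output into sh is to download the installer to a file, compare its SHA-256 against a checksum obtained out-of-band, and only then execute it. A minimal sketch of the verification step (the function name is illustrative):

```python
import hashlib

def sha256_matches(script_bytes: bytes, expected_hex: str) -> bool:
    """Return True only if the downloaded installer matches a known checksum.

    Intended use: save the install script to disk, verify it with this
    check, and run it only on a match -- never pipe it straight into sh.
    """
    return hashlib.sha256(script_bytes).hexdigest() == expected_hex.lower()
```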
Recommendations
  • Review the dynamic !`command` injections in scripts/create-lora-config.md and remove or narrow the .env* search so secrets are never embedded into prompt context.
  • Set trust_remote_code=False (or pin a vetted model revision) in scripts/dpo-training.py unless the model source is explicitly trusted.
  • Replace the curl -fsSL https://ollama.ai/install.sh | sh instruction in rules/local-ollama-setup.md with a download-then-verify step or an installation via a package manager.
Audit Metadata
Risk Level
HIGH
Analyzed
Mar 26, 2026, 10:46 AM