fine-tuning

Fail

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: HIGH ([REMOTE_CODE_EXECUTION], [PROMPT_INJECTION])
Full Analysis
  • [REMOTE_CODE_EXECUTION] (HIGH): The script scripts/finetune_lora.py invokes AutoModelForCausalLM.from_pretrained with trust_remote_code=True. This allows the execution of arbitrary Python code defined in the model's configuration or repository on the Hugging Face Hub, posing a severe risk if a model repository is compromised. Additionally, load_dataset in the same script uses the user-provided data_path, which can trigger remote code execution if it points to a malicious Hugging Face dataset script.
  • [PROMPT_INJECTION] (LOW): The skill exposes an indirect prompt-injection surface through its dataset ingestion logic.
    1. Ingestion points: the data_path parameter in scripts/finetune_lora.py and the DatasetPreparer class in SKILL.md.
    2. Boundary markers: the provided templates contain no explicit delimiters and no instructions to ignore embedded commands.
    3. Capability inventory: the skill performs model training, writes checkpoint files, and makes network requests to download weights.
    4. Sanitization: there is no evidence that input instructions or data content are validated or sanitized before being used for training.
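The remote-code-execution finding can be narrowed at the two call sites it names: disable trust_remote_code when loading the model and refuse data_path values that could resolve to a Hub dataset script. A minimal sketch under those assumptions (safe_model_kwargs, validate_data_path, and the extension allowlist are hypothetical helpers, not part of the audited skill):

```python
import os

# Hypothetical allowlist: only plain local data files, never a Hub repo id,
# since load_dataset() can execute Python from a dataset repository.
ALLOWED_DATA_EXTENSIONS = {".json", ".jsonl", ".csv"}

def validate_data_path(data_path: str) -> str:
    """Reject dataset paths that could trigger remote script execution."""
    ext = os.path.splitext(data_path)[1].lower()
    if ext not in ALLOWED_DATA_EXTENSIONS:
        raise ValueError(f"refusing non-file dataset path: {data_path!r}")
    return data_path

def safe_model_kwargs(model_id: str, revision: str) -> dict:
    """Build from_pretrained kwargs that disable remote code and pin a revision."""
    return {
        "pretrained_model_name_or_path": model_id,
        "trust_remote_code": False,  # never execute Hub-supplied Python
        "revision": revision,        # pin a commit hash, not a mutable branch
    }
```

With these helpers, the script would call AutoModelForCausalLM.from_pretrained(**safe_model_kwargs(model_id, commit_hash)) and pass only validated local files to load_dataset, so a compromised model or dataset repository cannot inject code into the training run.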
Recommendations
  • Set trust_remote_code=False in scripts/finetune_lora.py, or pin the model to a vetted commit hash via the revision argument before loading.
  • Restrict data_path to local data files so that load_dataset cannot execute a remote dataset script.
  • Validate and sanitize dataset content before training, and mark untrusted data with explicit boundary delimiters.
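The missing sanitization called out in the prompt-injection finding could begin with a coarse content filter over ingested records. A minimal sketch, assuming a small hand-written pattern list (SUSPICIOUS and flag_record are hypothetical; a production filter would need a broader, maintained ruleset rather than two regexes):

```python
import re

# Hypothetical patterns for instruction-like text embedded in training data.
SUSPICIOUS = [
    re.compile(r"ignore (all |any )?(previous|prior) instructions", re.I),
    re.compile(r"you are now", re.I),
]

def flag_record(text: str) -> bool:
    """Return True if a dataset record looks like an embedded instruction."""
    return any(pattern.search(text) for pattern in SUSPICIOUS)
```

Flagged records could be dropped or routed to manual review before the DatasetPreparer hands them to the trainer; this does not eliminate the injection surface, but it closes the "no sanitization" gap the audit identifies.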
Audit Metadata
  • Risk Level: HIGH
  • Analyzed: Feb 17, 2026, 06:45 PM