Skill: fine-tuning
Audit result: Fail
Audited by Gen Agent Trust Hub on Feb 17, 2026
Risk Level: HIGH (REMOTE_CODE_EXECUTION, PROMPT_INJECTION)
Full Analysis
- [REMOTE_CODE_EXECUTION] (HIGH): The script `scripts/finetune_lora.py` invokes `AutoModelForCausalLM.from_pretrained` with `trust_remote_code=True`. This allows the execution of arbitrary Python code defined in the model's configuration or repository on the Hugging Face Hub, posing a severe risk if a model repository is compromised. Additionally, `load_dataset` in the same script uses the user-provided `data_path`, which can trigger remote code execution if it points to a malicious Hugging Face dataset script.
- [PROMPT_INJECTION] (LOW): The skill exposes an indirect prompt-injection surface through its dataset ingestion logic.
  1. Ingestion points: the `data_path` parameter in `scripts/finetune_lora.py` and the `DatasetPreparer` class in `SKILL.md`.
  2. Boundary markers: no explicit delimiters or instructions to ignore embedded commands are present in the provided templates.
  3. Capability inventory: the skill performs model training, file writing for checkpoints, and network operations for downloading weights.
  4. Sanitization: there is no evidence of sanitization or validation of the input instructions or data content before they are used for training.
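The `trust_remote_code` finding can be addressed by refusing Hub-supplied code and pinning the model to an audited commit. The sketch below is illustrative: the helper name and the full-commit-hash policy are assumptions, not part of the audited `scripts/finetune_lora.py`.

```python
# Sketch: build hardened kwargs for AutoModelForCausalLM.from_pretrained.
# The helper name and commit-hash policy are illustrative assumptions,
# not taken from the audited script.
import re

def safe_pretrained_kwargs(model_id: str, revision: str) -> dict:
    """Return from_pretrained kwargs that refuse remote code and pin a commit.

    Pinning a full 40-character commit hash prevents a compromised
    repository from silently serving new files under a moving ref
    such as "main".
    """
    if not re.fullmatch(r"[0-9a-f]{40}", revision):
        raise ValueError("revision must be a full 40-character commit hash")
    return {
        "pretrained_model_name_or_path": model_id,
        "revision": revision,
        "trust_remote_code": False,  # never execute Hub-supplied code
    }

# Usage (hypothetical model id and hash):
# model = AutoModelForCausalLM.from_pretrained(
#     **safe_pretrained_kwargs("org/model", "0123..."))
```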
Recommendations
- Serious security threats were detected. Do not run this skill until the findings above are remediated: disable `trust_remote_code` (or pin models to audited commits), restrict `data_path` to vetted local data files so `load_dataset` cannot execute a Hub dataset script, and add boundary delimiters plus sanitization around ingested dataset content before it reaches training.
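The missing `data_path` validation could take a shape like the following: accept only existing local data files, so `load_dataset` never receives a Hub dataset id whose loading script would be executed. The allow-listed extensions are an assumption, not part of the skill.

```python
# Sketch: validate data_path so load_dataset only ever sees local data
# files, never a Hub dataset id or loading script.
# The allowed extensions below are an illustrative assumption.
from pathlib import Path

ALLOWED_SUFFIXES = {".json", ".jsonl", ".csv", ".parquet"}

def validate_data_path(data_path: str) -> Path:
    """Accept only an existing local file with a data (non-script) extension."""
    p = Path(data_path).resolve()
    if not p.is_file():
        raise ValueError(f"data_path must be an existing local file: {p}")
    if p.suffix.lower() not in ALLOWED_SUFFIXES:
        raise ValueError(f"unsupported data file type: {p.suffix}")
    return p

# Usage (hypothetical): load_dataset("json",
#     data_files=str(validate_data_path(user_supplied_path)))
```

Passing the validated path via `data_files` with an explicit builder (e.g. `"json"`) also avoids `load_dataset` interpreting the argument as a dataset repository.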