hugging-face-model-trainer
Audited by Socket on Mar 18, 2026
1 alert found:
Anomaly
This script is a legitimate automation tool for converting LoRA-merged Hugging Face models to GGUF and quantized formats and uploading the results. It contains no explicit malicious payload, hidden backdoor, or obfuscated code. It does, however, perform several high-risk supply-chain and execution actions: it loads remote model/tokenizer code with trust_remote_code=True, clones and executes scripts from an external GitHub repository, installs packages and builds binaries, and uploads model artifacts to Hugging Face. These behaviors create significant supply-chain and exfiltration risk if any of the external repositories or credentials are compromised, or if the script runs in an environment containing sensitive data or a shared /tmp. The probability that this script is intentionally malicious is low, but the overall security risk is moderate to high because it executes untrusted code and uploads artifacts over the network. It should only be run in a controlled, isolated environment, after auditing the external repositories and verifying that tokens and credentials are safe.
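One way to reduce the clone-and-execute risk described above is to gate the clone step on an explicit allowlist of repositories that have already been audited. The sketch below is illustrative only and is not taken from the audited script; the allowlist entry and helper name are assumptions (llama.cpp is a plausible GGUF-conversion dependency, but the actual repository used by the script should be confirmed by inspection):

```python
from urllib.parse import urlparse

# Hypothetical allowlist of externally audited repositories.
# Populate this only after manually reviewing each repo.
AUDITED_REPOS = {
    "github.com/ggerganov/llama.cpp",  # assumed example entry, not confirmed from the script
}

def is_audited(repo_url: str) -> bool:
    """Return True only if the repository URL is on the audited allowlist."""
    parsed = urlparse(repo_url)
    # Normalize: drop scheme, trailing ".git", and surrounding slashes,
    # so https://github.com/org/repo.git matches "github.com/org/repo".
    key = f"{parsed.netloc}{parsed.path}".removesuffix(".git").strip("/")
    return key in AUDITED_REPOS

def safe_clone_url(repo_url: str) -> str:
    """Raise instead of returning a URL that is not on the allowlist."""
    if not is_audited(repo_url):
        raise PermissionError(f"refusing to clone unaudited repo: {repo_url}")
    return repo_url
```

Pinning the clone to a specific audited commit hash (rather than a branch tip) would further narrow the window in which a compromised upstream repository could inject code.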