hugging-face-model-trainer

Fail

Audited by Socket on Feb 16, 2026

2 alerts found:

Malware · Anomaly — Severity: HIGH
SKILL.md

[Skill Scanner] Installation of third-party script detected

All findings:
[CRITICAL] command_injection: Installation of third-party script detected (SC006) [AITech 9.1.4]
[CRITICAL] command_injection: Installation of third-party script detected (SC006) [AITech 9.1.4]
[HIGH] command_injection: Backtick command substitution detected (CI003) [AITech 9.1.4]
[HIGH] command_injection: Backtick command substitution detected (CI003) [AITech 9.1.4]
[HIGH] command_injection: Backtick command substitution detected (CI003) [AITech 9.1.4]
[HIGH] command_injection: Backtick command substitution detected (CI003) [AITech 9.1.4]

The analyzed fragment describes a coherent, purpose-aligned workflow for TRL-based Hugging Face Jobs training, with appropriate mechanisms for authentication, monitoring, and model persistence. The credential-handling pattern is normal for cloud-based training, but best practices for secret sanitization and access control should still be followed. Overall, the approach is benign and aligns with expected secure training orchestration.

LLM verification: No explicit malware or obfuscated malicious code was found in the provided documentation. The bigger risk is operational and supply-chain: mandatory inline submission via hf_jobs(), making HF_TOKEN widely available with write permissions, required inclusion of Trackio without sanitization guidance, and unpinned dependency installation. These practices materially increase the chance of credential exposure or data leakage if templates, agent tooling, or dependency sources are compromised. Recommen…
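The backtick-substitution findings (CI003) above come down to letting a shell parse strings that may contain untrusted input. A minimal Python sketch of the distinction, with an invented hostile value for illustration (this is not code from the audited skill):

```python
import subprocess

# A hostile value that a backtick/$()-style shell template would execute:
hostile = "model.bin; rm -rf /tmp/scratch"

# Unsafe pattern (shown commented out, do not run): with shell=True the
# shell parses the string, so everything after ';' runs as its own command.
# subprocess.run(f"ls {hostile}", shell=True)

# Safer pattern: pass an argument vector; no shell ever parses the value,
# so the payload is treated as inert data.
result = subprocess.run(["echo", hostile], capture_output=True, text=True)
print(result.stdout.strip())  # the string is printed verbatim, not executed
```

The same principle applies inside shell scripts: quoting variables and avoiding backtick/`$()` substitution around untrusted values closes the injection path the scanner is flagging.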

Confidence: 95% · Severity: 90%

Anomaly — Severity: LOW
scripts/convert_to_gguf.py

This script is a legitimate automation for converting LoRA-merged Hugging Face models to GGUF and quantized formats and uploading the results. It does not itself contain explicit malicious payloads, hidden backdoors, or obfuscated code. However, it performs multiple high-risk supply-chain and execution actions: it loads remote model/tokenizer code with trust_remote_code=True, clones and executes scripts from an external GitHub repo, installs packages and builds binaries, and uploads model artifacts to Hugging Face. These behaviors create significant supply-chain and exfiltration risk if any of the external repositories or credentials are compromised, or if the script runs in an environment with sensitive data or a shared /tmp. The probability that this script is intentionally malicious is low, but the security risk is moderate-to-high due to the execution of untrusted code and network artifact uploads; it should only be run in a controlled, isolated environment, after auditing the external repositories and confirming that tokens and credentials are safe.
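One mitigation the assessment implies is pinning every external repo and model to an audited commit rather than a moving branch, so remote code cannot silently change between runs. A small illustrative guard (the function name is invented; the example hash is simply this audit's own package revision):

```python
import re

def require_pinned_revision(revision: str) -> str:
    """Reject mutable refs like 'main'; demand a full 40-character commit
    hash before it is passed to git clone or a Hub download, so the code
    fetched today is byte-identical to the code that was audited."""
    if not re.fullmatch(r"[0-9a-f]{40}", revision):
        raise ValueError(f"revision must be a pinned commit hash, got {revision!r}")
    return revision

# Accepted: a full commit hash (here, the skill's audit revision)
require_pinned_revision("370a6eeb77610c292e6a73f9d2fcbd0772b60f09")

# Rejected: a branch name, which can point at different code tomorrow
# require_pinned_revision("main")  # raises ValueError
```

The same check pairs naturally with keeping trust_remote_code=False unless the pinned revision's custom code has been reviewed.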

Confidence: 90% · Severity: 60%
Audit Metadata
Analyzed At
Feb 16, 2026, 01:38 AM
Package URL
pkg:socket/skills-sh/patchy631%2Fai-engineering-hub%2Fhugging-face-model-trainer%2F@370a6eeb77610c292e6a73f9d2fcbd0772b60f09