huggingface-transformers
Warn
Audited by Gen Agent Trust Hub on Mar 3, 2026
Risk Level: MEDIUM (REMOTE_CODE_EXECUTION, EXTERNAL_DOWNLOADS, PROMPT_INJECTION)
Full Analysis
- [REMOTE_CODE_EXECUTION]: The skill's code snippets use the `trust_remote_code=True` parameter when loading models from the Hugging Face Hub.
  - Evidence: Found in `SKILL.md` within the loading examples for the 'Mistral' and 'TheBloke' model variants.
  - Risk: This setting allows the `transformers` library to execute arbitrary Python code defined by the model's author within the repository. It is an intentional feature for custom architectures but can be exploited if a model repository is malicious or compromised.
- [EXTERNAL_DOWNLOADS]: The skill is configured to download large model weights, tokenizers, and datasets from Hugging Face.
  - Evidence: Frequent use of the `.from_pretrained()` and `load_dataset()` functions throughout `SKILL.md`, targeting various repositories.
  - Note: Although Hugging Face is a well-known service, users should verify the provenance of specific community-uploaded models before execution.
- [PROMPT_INJECTION]: The skill defines several patterns where untrusted data is ingested and processed by models without sanitization or boundary markers.
  - Ingestion points: `messages` in the text-generation pipeline, `context` in the question-answering pipeline, and the `prompt` argument of the `LocalLLM.generate` method.
  - Boundary markers: Absent; user input is passed directly to the model templates.
  - Capability inventory: The skill can write files via `save_pretrained` and perform network operations to download models.
  - Sanitization: Absent; no input validation or escaping is applied to the data before it is processed by the models.
Audit Metadata