huggingface-transformers

Warn

Audited by Gen Agent Trust Hub on Mar 3, 2026

Risk Level: MEDIUMREMOTE_CODE_EXECUTIONEXTERNAL_DOWNLOADSPROMPT_INJECTION
Full Analysis
  • [REMOTE_CODE_EXECUTION]: The skill's code snippets utilize the trust_remote_code=True parameter when loading models from the Hugging Face Hub.
  • Evidence: Found in SKILL.md within the loading examples for 'Mistral' and 'TheBloke' model variants.
  • Risk: This setting allows the transformers library to execute arbitrary Python code defined by the model's author within the repository. This is an intentional feature for custom architectures but can be exploited if a model repository is malicious or compromised.
  • [EXTERNAL_DOWNLOADS]: The skill is configured to download large model weights, tokenizers, and datasets from Hugging Face.
  • Evidence: Frequent use of .from_pretrained() and load_dataset() functions throughout SKILL.md targeting various repositories.
  • Note: Although Hugging Face is a well-known service, users should verify the provenance of specific community-uploaded models before execution.
  • [PROMPT_INJECTION]: The skill defines several patterns where untrusted data is ingested and processed by models without sanitization or boundary markers.
  • Ingestion points: messages in the text generation pipeline, context in the question-answering pipeline, and the prompt argument in the LocalLLM.generate method.
  • Boundary markers: Absent; user input is passed directly to the model templates.
  • Capability inventory: The skill has the capability to write files via save_pretrained and perform network operations to download models.
  • Sanitization: Absent; no input validation or escaping is applied to the data before it is processed by the models.
Audit Metadata
Risk Level
MEDIUM
Analyzed
Mar 3, 2026, 12:51 PM