hugging-face-paper-publisher

Fail

Audited by Gen Agent Trust Hub on Feb 15, 2026

Risk Level: HIGH (PROMPT_INJECTION, COMMAND_EXECUTION, EXTERNAL_DOWNLOADS)
Full Analysis
  • Indirect Prompt Injection (HIGH): The skill creates an attack surface by ingesting untrusted data (titles, abstracts, and citations) from arXiv and writing that data into Model and Dataset README files on the Hugging Face Hub.
  • Ingestion points: The scripts/paper_manager.py script fetches metadata for externally supplied arXiv IDs (via the index and info commands).
  • Capability inventory: The link command modifies local model/dataset cards and uploads them back to the Hugging Face Hub, allowing external content to influence repository metadata.
  • Sanitization: No evidence of sanitization or escaping of external paper metadata is present in the provided templates, increasing the risk that a maliciously crafted arXiv paper could inject instructions or malicious markdown into a repository's documentation (a minimal sketch of this ingestion path follows this list).
  • Unverifiable Dependencies & Execution (MEDIUM): The core logic of the skill resides in scripts/paper_manager.py, which is referenced throughout the documentation but not provided in the skill payload.
  • Evidence: Examples in example_usage.md and quick_reference.md show that the AI agent is expected to execute this script with various parameters, including invocations that handle the sensitive HF_TOKEN credential.
  • Credentials Handling (LOW): The documentation recommends storing HF_TOKEN in .env files or environment variables. While this is standard practice, it highlights the skill's access to high-privilege write tokens for the Hugging Face Hub (see the token-loading sketch after this list).
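
The ingestion-to-upload path described above can be summarized in a short sketch. This is not the skill's actual code (scripts/paper_manager.py is absent from the payload): fetch_arxiv_metadata, escape_untrusted, and link_paper_to_repo are hypothetical names, and only ModelCard.load and push_to_hub are real huggingface_hub calls, used here to show how unescaped arXiv metadata would otherwise land verbatim in a repository card.

    # Hypothetical sketch of the flow flagged above; NOT the skill's real code.
    import os
    import re

    from huggingface_hub import ModelCard


    def escape_untrusted(text: str) -> str:
        """Neutralize markup that untrusted arXiv metadata could smuggle into a card."""
        text = re.sub(r"[<>`]", "", text)          # drop HTML/markdown control characters
        return re.sub(r"\s+", " ", text).strip()   # collapse newlines and excess whitespace


    def fetch_arxiv_metadata(arxiv_id: str) -> dict:
        # Placeholder: in the real skill this data would come from the arXiv API
        # and is attacker-controlled (title, abstract, citations).
        return {"title": "...", "abstract": "..."}


    def link_paper_to_repo(arxiv_id: str, repo_id: str) -> None:
        meta = fetch_arxiv_metadata(arxiv_id)
        card = ModelCard.load(repo_id)
        # Without escape_untrusted, the title and abstract would flow verbatim
        # into the README that gets published on the Hub.
        section = (
            f"\n## Linked paper\n"
            f"**{escape_untrusted(meta['title'])}** (arXiv:{arxiv_id})\n\n"
            f"{escape_untrusted(meta['abstract'])}\n"
        )
        card.content += section
        # Uploading requires a write-scoped token; see the credentials note below.
        card.push_to_hub(repo_id, token=os.environ.get("HF_TOKEN"))

Even this minimal escaping only removes obvious markup; plain-text prompt-injection phrasing in an abstract would still reach the published card.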
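
For the credential handling noted in the last bullet, the documented pattern is the standard .env / environment-variable approach. The skill's own loading code is not available, so the following is only an illustrative sketch of that pattern.

    # Sketch of the documented credential pattern (HF_TOKEN in a .env file or
    # the environment), using the standard python-dotenv / os.environ approach.
    import os

    from dotenv import load_dotenv  # pip install python-dotenv

    load_dotenv()  # reads HF_TOKEN from a local .env file, if present

    hf_token = os.environ.get("HF_TOKEN")
    if not hf_token:
        raise RuntimeError("HF_TOKEN is not set; refusing to run without credentials")
    # Note: this token grants write access to the Hub, so anything the skill
    # uploads (including injected README content) is published under it.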
Recommendations
  • The automated analysis detected serious security threats; review the findings above before installing this skill or granting it Hub write access.
Audit Metadata
  • Risk Level: HIGH
  • Analyzed: Feb 15, 2026, 09:14 PM