hugging-face-paper-publisher
Fail
Audited by Gen Agent Trust Hub on Feb 15, 2026
Risk Level: HIGH (PROMPT_INJECTION, COMMAND_EXECUTION, EXTERNAL_DOWNLOADS)
Full Analysis
- Indirect Prompt Injection (HIGH): The skill creates a vulnerability surface by ingesting untrusted data (titles, abstracts, and citations) from arXiv and writing that data into Model and Dataset README files on the Hugging Face Hub.
  - Ingestion points: The `scripts/paper_manager.py` script pulls data from external arXiv IDs (via the `index` and `info` commands).
  - Capability inventory: The `link` command modifies local model/dataset cards and uploads them back to the Hugging Face Hub, allowing external content to influence repository metadata.
  - Sanitization: No evidence of sanitization or escaping of external paper metadata is present in the provided templates, increasing the risk that a maliciously crafted arXiv paper could inject instructions or malicious markdown into a repository's documentation (see the escaping sketch after this list).
- Unverifiable Dependencies & Execution (MEDIUM): The core logic of the skill resides in `scripts/paper_manager.py`, which is referenced throughout the documentation but not provided in the skill payload (see the integrity-check sketch after this list).
  - Evidence: Examples in `example_usage.md` and `quick_reference.md` show that the AI agent is expected to execute this script with various parameters, including ones that handle sensitive `HF_TOKEN` credentials.
- Credentials Handling (LOW): The documentation recommends storing `HF_TOKEN` in `.env` files or environment variables. While this is standard practice, it highlights the skill's access to high-privilege write tokens for the Hugging Face Hub (a minimal token-loading sketch follows the list).
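To make the sanitization gap concrete, here is a minimal sketch of the kind of escaping step that appears to be missing from the provided templates. It is not taken from the skill; `sanitize_metadata` is a hypothetical helper, shown only to illustrate neutralizing untrusted arXiv fields before they are interpolated into a model or dataset card.

```python
import html
import re

def sanitize_metadata(value: str) -> str:
    """Hypothetical mitigation: neutralize markup in untrusted arXiv
    metadata (titles, abstracts, citations) before it is written into
    a Hub README."""
    # Escape raw HTML so embedded tags or comments cannot render in the card.
    value = html.escape(value)
    # Escape Markdown control characters that could inject links,
    # images, headings, or list items into the README.
    value = re.sub(r"([\\`*_{}\[\]()#+!|>~-])", r"\\\1", value)
    # Collapse newlines so a multi-line payload cannot open a new
    # Markdown block (e.g. a fenced code block or an HTML comment).
    return " ".join(value.split())

# A maliciously crafted title is rendered inert:
title = 'Nice Paper <!-- ignore previous instructions --> [click](https://evil.example)'
print(sanitize_metadata(title))
```

Whether an escaping step like this belongs in the templates or in `paper_manager.py` itself cannot be determined, since the script is not included in the payload.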
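Because `scripts/paper_manager.py` is referenced but not shipped, one possible mitigation (a sketch under the assumption that a reviewed copy of the script exists locally; `script_is_unmodified` and `PINNED_SHA256` are hypothetical) is to pin the script to a digest recorded at review time and refuse to run it if the file has changed:

```python
import hashlib
from pathlib import Path

# Hypothetical integrity check: the pinned digest would be recorded when
# the script is first reviewed, not fetched at runtime.
PINNED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"  # placeholder

def script_is_unmodified(path: str = "scripts/paper_manager.py") -> bool:
    """Return True only if the script matches the reviewed version."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return digest == PINNED_SHA256

if not script_is_unmodified():
    raise SystemExit("scripts/paper_manager.py does not match the audited version; aborting.")
```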
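For the credentials finding, the documented pattern reduces, roughly, to resolving the token from the environment and never embedding it in the skill files. The `python-dotenv` import below is an assumption (the documentation only mentions `.env` files or environment variables); any `.env` loader would do.

```python
import os

# Optional: load a local .env file if python-dotenv is installed
# (an assumption; the skill docs only mention ".env files or environment variables").
try:
    from dotenv import load_dotenv
    load_dotenv()
except ImportError:
    pass

hf_token = os.environ.get("HF_TOKEN")
if not hf_token:
    raise RuntimeError("HF_TOKEN is not set; refusing to write to the Hugging Face Hub.")
# The token grants write access to Hub repositories, so it should be scoped
# as narrowly as possible and never logged or written into cards.
```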
Recommendations
- The automated audit detected serious security threats; see the Full Analysis above.
Audit Metadata