deep-research

Fail

Audited by Gen Agent Trust Hub on Apr 21, 2026

Risk Level: HIGH. Tags: CREDENTIALS_UNSAFE, COMMAND_EXECUTION, EXTERNAL_DOWNLOADS
Full Analysis
  • [CREDENTIALS_UNSAFE]: The skill documentation and its api-reference.md explicitly direct the agent to retrieve an API key from the local file path /Users/lingzhi/Code/keys.md. Hardcoding a specific local path for secret retrieval is a security risk, as it exposes credentials to the agent's context.
  • [COMMAND_EXECUTION]: The skill's workflow depends on executing multiple Python scripts via the shell, including an external tool located at /Users/lingzhi/Code/documents/tool/paper_finder/paper_finder.py. This grants the skill broad capability to execute code and interact with the host system.
  • [EXTERNAL_DOWNLOADS]: The skill is designed to automatically download PDF documents from external domains such as arxiv.org and api.semanticscholar.org. While functional for its stated purpose, the automated retrieval and processing of external files introduce an attack surface for malicious or untrusted content.
  • [PROMPT_INJECTION]: The SKILL.md file uses forceful procedural instructions such as 'CRITICAL' and 'MUST' to enforce a strict sequential workflow. These patterns are designed to constrain agent autonomy and ensure obedience to a specific set of operational rules.
  • [DATA_EXPOSURE]: The skill processes paper metadata and extracts text from external PDFs for synthesis. The ingestion of this untrusted external data represents a potential vector for indirect prompt injection, where content within the papers could influence the agent's subsequent analysis or report generation.
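A safer alternative to the hardcoded key path flagged above is to resolve the secret from the environment at runtime, so no filesystem location for credentials ever appears in the agent's instructions. This is a minimal sketch; the environment variable name is hypothetical and not taken from the audited skill.

```python
import os


def load_api_key(env_var: str = "SEMANTIC_SCHOLAR_API_KEY") -> str:
    """Fetch an API key from the environment rather than from a
    hardcoded local file path, so the secret's location never
    enters the agent's context."""
    key = os.environ.get(env_var)
    if not key:
        # Fail closed: refuse to continue without the credential.
        raise RuntimeError(f"{env_var} is not set; refusing to proceed.")
    return key
```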
Recommendations
  • Automated analysis detected serious security threats in this skill; review the findings above before enabling it.
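For the EXTERNAL_DOWNLOADS finding, one common mitigation is to validate a candidate download against a host allowlist, a declared content type, and a size bound before fetching the body. The sketch below is illustrative only; the allowed hosts and size limit are assumptions, not values taken from the skill.

```python
from urllib.parse import urlparse

# Hypothetical allowlist covering the domains named in the finding.
ALLOWED_HOSTS = {"arxiv.org", "export.arxiv.org", "api.semanticscholar.org"}
MAX_PDF_BYTES = 20 * 1024 * 1024  # refuse anything larger than 20 MB


def is_safe_download(url: str, content_type: str, content_length: int) -> bool:
    """Check a download candidate before fetching its body: the host
    must be allowlisted, the declared MIME type must be PDF, and the
    declared size must be positive and bounded."""
    host = urlparse(url).hostname or ""
    return (
        host in ALLOWED_HOSTS
        and content_type.split(";")[0].strip() == "application/pdf"
        and 0 < content_length <= MAX_PDF_BYTES
    )
```

Even with these checks, the fetched PDF text should still be treated as untrusted input, per the DATA_EXPOSURE finding.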
Audit Metadata
  • Risk Level: HIGH
  • Analyzed: Apr 21, 2026, 07:28 AM