aris-autonomous-ml-research

Fail

Audited by Gen Agent Trust Hub on Mar 16, 2026

Risk Level: HIGHEXTERNAL_DOWNLOADSREMOTE_CODE_EXECUTIONCOMMAND_EXECUTIONDATA_EXFILTRATIONPROMPT_INJECTION
Full Analysis
  • [EXTERNAL_DOWNLOADS]: The skill's installation instructions direct the user to clone a repository from an untrusted GitHub account (wanshuiyin/Auto-claude-code-research-in-sleep). This source is not verified and is used to provide the skill's primary logic and executable tools.
  • [REMOTE_CODE_EXECUTION]: The skill requires downloading and running external software, including the @openai/codex npm package and a Python-based MCP server from the untrusted repository, which are then integrated into the agent's execution environment.
  • [DATA_EXFILTRATION]: The skill is designed to ingest sensitive private data from local sources like Zotero libraries and Obsidian vaults. Combined with the capability to make network requests to external webhooks (e.g., Feishu/Lark), this creates a risk where research ideas or private notes could be exfiltrated.
  • [PROMPT_INJECTION]: The skill processes untrusted content from research papers (arXiv, Scholar, and local PDFs) to generate research ideas and reports. It lack boundary markers or sanitization for this external input, making it vulnerable to indirect prompt injection where malicious instructions hidden in research papers could hijack the autonomous workflow.
  • [COMMAND_EXECUTION]: The skill executes high-risk shell commands including 'git clone', 'npm install', 'pip install', 'claude mcp add', and 'pdflatex', and it programmatically triggers the agent CLI using the Python subprocess module.
Recommendations
  • AI detected serious security threats
Audit Metadata
Risk Level
HIGH
Analyzed
Mar 16, 2026, 12:15 AM