autoresearch
Fail
Audited by Snyk on Mar 11, 2026
Risk Level: CRITICAL
Full Analysis
CRITICAL E005: Suspicious download URL detected in skill instructions.
- Suspicious download URL detected (risk: 0.75). Most links point to legitimate resources (Karpathy's repo, a Hugging Face dataset, tweets). However, the instructions also invoke a remote install script (astral.sh/uv/install.sh) via curl|sh and ask the agent to clone and execute several small third-party GitHub forks. Piping a remote shell script straight into sh and running unreviewed repository code are both established malware-distribution vectors, raising the overall risk to moderate-to-high.
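A common mitigation for the curl|sh pattern flagged above is to fetch the script to a file, verify a pinned checksum, and only then execute it. A minimal offline sketch (the "installer" below is a local stand-in file, and the pinned digest is derived from it purely so the demo runs without network access; in practice the expected hash comes from the vendor's release notes):

```shell
# Stand-in for the downloaded installer (offline demo; a real run would
# use: curl -LsSf https://astral.sh/uv/install.sh -o install.sh).
printf 'echo install-ok\n' > install.sh

# Pin the expected digest. In practice this value is copied from a trusted
# out-of-band source; here it is computed locally only to keep the demo offline.
expected=$(sha256sum install.sh | awk '{print $1}')

# Verify before executing; refuse to run on any mismatch.
actual=$(sha256sum install.sh | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
    sh install.sh
else
    echo "checksum mismatch; refusing to run" >&2
fi
```

This turns a blind fetch-and-execute into a two-step download that can be inspected and diffed before anything runs.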
MEDIUM W011: Third-party content exposure detected (indirect prompt injection risk).
- Third-party content exposure detected (risk: 0.90). The skill's setup explicitly clones public GitHub repos (karpathy/autoresearch or miolini/autoresearch-macos) and runs prepare.py to download public datasets (e.g., FineWeb-Edu or TinyStories from Hugging Face). The agent is then expected to read and modify the repo's train.py as part of its autonomous loop. Because those files are fetched from third-party sources, the agent is exposed to untrusted, user-generated content that can steer its actions (indirect prompt injection).
MEDIUM W012: Unverifiable external dependency detected (runtime URL that controls agent).
- Potentially malicious external URL detected (risk: 0.90). The skill runs a fetch-and-execute shell command at runtime (curl -LsSf https://astral.sh/uv/install.sh | sh) and requires cloning remote repositories (https://github.com/miolini/autoresearch-macos.git or https://github.com/karpathy/autoresearch.git) whose code is then executed via uv run prepare.py and uv run train.py. These URLs therefore deliver and execute remote code that the skill depends on but does not verify.
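For the clone-and-execute step, the usual mitigation is to pin the clone to a specific reviewed commit rather than running whatever the default branch currently serves. A hedged offline sketch (a local stand-in repository plays the role of the remote, and the pinned hash is captured at "review time" rather than being a real vetted revision):

```shell
set -e

# Build a local stand-in "remote" repo so the demo needs no network.
mkdir -p remote && cd remote
git init -q
git config user.email demo@example.com
git config user.name demo
echo 'print("train")' > train.py
git add train.py && git commit -qm 'initial'
PINNED=$(git rev-parse HEAD)   # in practice: a commit hash you reviewed beforehand
cd ..

# Clone and check out exactly the reviewed revision; checkout fails loudly
# if the pinned commit is absent, so newer unreviewed code never runs.
git clone -q remote clone
cd clone
git checkout -q "$PINNED"
```

Pinning does not make the code trustworthy by itself, but it ensures that what executes is the same code that was reviewed, not a later push to the same URL.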
Audit Metadata