crawl
Audited by Gen Agent Trust Hub on Feb 13, 2026
================================================================================
🔴 VERDICT: CRITICAL
This skill is critically vulnerable to shell injection. An attacker can craft a malicious JSON input that, when passed to the crawl.sh script, will execute arbitrary commands on the system running the skill. This allows for complete compromise of the environment. Additionally, the skill relies on an external API (Tavily) and processes arbitrary web content, which could pose an indirect prompt injection risk if the output is fed into an LLM.
Total Findings: 3
🔴 CRITICAL Findings: • Shell Injection Vulnerability
- Line 48: The
$JSON_INPUTvariable, taken directly from the first command-line argument, is passed to thecurlcommand using double quotes (--data "$JSON_INPUT"). This creates a classic shell injection vulnerability. If theJSON_INPUTcontains a double quote (") followed by shell metacharacters (e.g.,;,|,&,$(...)), these metacharacters will be interpreted by the shell, allowing an attacker to execute arbitrary commands. For example, a JSON input like{"url": "https://example.com", "foo": "bar\" ; rm -rf /tmp ; echo \"baz"}would executerm -rf /tmp.
🔵 LOW Findings: • External Dependency on Tavily API
- Line 49: The skill relies on the external
https://api.tavily.com/crawlendpoint for its core functionality. While Tavily is a legitimate service, any external API dependency introduces a reliance on a third-party service's availability and security. Users should be aware that their data (URLs to crawl, instructions) is sent to Tavily.
ℹ️ TRUSTED SOURCE References: • Indirect Prompt Injection Risk (User Responsibility)
- Line 106: The skill is designed to crawl web content, which can then be used "for agentic use (feeding results into context)". If the crawled web content contains malicious instructions or data, and this content is subsequently fed into an LLM, it could lead to indirect prompt injection against the LLM. This is a risk inherent to processing untrusted external content and feeding it to an LLM, rather than a direct vulnerability in the skill itself.
================================================================================
- AI detected serious security threats