blockrun
Fail
Audited by Gen Agent Trust Hub on Feb 16, 2026
Risk Level: HIGHEXTERNAL_DOWNLOADSREMOTE_CODE_EXECUTIONCOMMAND_EXECUTIONPROMPT_INJECTIONCREDENTIALS_UNSAFE
Full Analysis
- Indirect Prompt Injection (HIGH): The skill is designed to ingest live, attacker-controlled data from X/Twitter and the web via the xAI Live Search functionality.
- Ingestion points: Untrusted data enters the agent context via
client.chat(..., search=True)as described inSKILL.md. - Boundary markers: There are no delimited boundaries or explicit instructions provided to the external models to ignore embedded instructions within search results.
- Capability inventory: The skill has high-privilege access to
Bash(python:*),Bash(python3:*),Bash(pip:*), andBash(source:*). - Sanitization: No sanitization or filtering of the ingested external content is mentioned. An attacker could post content on X/Twitter that, when retrieved, tricks the agent into executing malicious bash commands.
- Unverifiable Dependencies (HIGH): The skill requires
pip install blockrun-llm. This package is not from a recognized trusted source (e.g., Anthropic, Google, OpenAI). Installing and upgrading untrusted packages viapipis a primary vector for supply chain attacks. - Excessive Permissions (HIGH): The
allowed-toolsconfiguration is highly permissive, specifically granting access toBash(pip:*)andBash(source:*). This allows the skill to install any arbitrary software or execute shell scripts from computed or remote paths, bypassing standard safety constraints. - Insecure Credential Handling (MEDIUM): The skill stores sensitive wallet session data in
$HOME/.blockrun/.session. This file contains the credentials/state for on-chain USDC payments. Unauthorized access to this file via local file read vulnerabilities or prompt injection could lead to the theft of user funds.
Recommendations
- AI detected serious security threats
Audit Metadata