blockrun
Fail
Audited by Gen Agent Trust Hub on Feb 16, 2026
Risk Level: HIGHEXTERNAL_DOWNLOADSCOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
- [EXTERNAL_DOWNLOADS] (MEDIUM): The skill requires the installation of the
blockrun-llmpackage viapip. This package is not from a trusted source, and its contents are not verified, posing a significant supply chain risk. - [COMMAND_EXECUTION] (MEDIUM): The skill requests permissions for
Bash(pip:*),Bash(python:*), andBash(source:*). These tools allow the skill to install arbitrary software, execute Python code, and modify the shell environment, providing a large attack surface. - [INDIRECT_PROMPT_INJECTION] (HIGH):
- Ingestion points: The skill retrieves live data from X (Twitter) and the general web via the
xai/grok-3search functionality. - Boundary markers: None. The documentation shows external data being fed directly into the agent's context without any delimiters or 'ignore' instructions.
- Capability inventory: The agent possesses
BashandReadcapabilities, meaning it can execute system commands or read local files. - Sanitization: Absent. Malicious instructions hidden in a tweet or webpage could be interpreted as commands by the agent, leading to unauthorized actions like file exfiltration or system modification.
- [METADATA_POISONING] (MEDIUM): The documentation lists models that do not currently exist, such as 'GPT-5.2' and 'Nano Banana'. Providing false information about capabilities and models is a deceptive practice that undermines the safety profile of the skill.
- [DATA_EXPOSURE] (MEDIUM): The skill creates and manages a cryptocurrency wallet at
$HOME/.blockrun/.session. This is a high-value target for exfiltration, and because the agent has network access through the SDK, this sensitive data is at risk if the agent is compromised via prompt injection.
Recommendations
- AI detected serious security threats
Audit Metadata