blockrun

Fail

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: HIGHEXTERNAL_DOWNLOADSCREDENTIALS_UNSAFEDATA_EXFILTRATIONCOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
  • Unverifiable Dependencies & Remote Code Execution (HIGH): The skill instructs the agent to install the blockrun-llm package via pip. This package is not from a trusted organization, introducing a major supply chain risk where malicious code could be executed on the host system during installation or at runtime.- Data Exposure & Exfiltration (HIGH): The skill manages a cryptocurrency wallet stored in a local session file at ~/.blockrun/.session. It also routes user prompts to an unverified third-party service (BlockRun), which constitutes data exfiltration of potentially sensitive information.- Command Execution (MEDIUM): The skill requests broad permissions via allowed-tools, including Bash(pip:*), Bash(python:*), and Bash(source:*). These tools allow for arbitrary code execution and environment modification, which could be leveraged by an untrusted dependency to compromise the host.- Indirect Prompt Injection (LOW): The skill acts as a proxy to external models (e.g., GPT-5, Grok). This creates a surface where malicious instructions in the external models' responses could influence the agent's subsequent actions. Evidence: Ingestion points via client.chat(); Absence of boundary markers or sanitization; Capability to execute shell commands and read files.- Metadata Poisoning (MEDIUM): The skill uses potentially deceptive or non-standard claims (e.g., 'Google Antigravity', 'GPT-5.2') which may mislead users about the nature and safety of the external services being integrated.
Recommendations
  • AI detected serious security threats
Audit Metadata
Risk Level
HIGH
Analyzed
Feb 17, 2026, 05:09 PM