
bankr

Status: Warn

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: MEDIUM
Flags: EXTERNAL_DOWNLOADS, COMMAND_EXECUTION, CREDENTIALS_UNSAFE, DATA_EXFILTRATION, PROMPT_INJECTION
Full Analysis
  • [EXTERNAL_DOWNLOADS] (MEDIUM): The documentation (error-handling.md) instructs users to install the @bankr/cli package via npm or bun. The package is not hosted by a trusted organization listed in the security guidelines, posing a potential supply-chain risk.
  • [COMMAND_EXECUTION] (MEDIUM): The skill facilitates execution of raw shell commands and blockchain transactions. In particular, the 'submit' endpoint in the Sign and Submit API (sign-submit-api.md) is documented to execute immediately, with no confirmation prompt, which is a high-risk capability for an AI agent (a confirmation-gate sketch follows this list).
  • [CREDENTIALS_UNSAFE] (LOW): The documentation describes storing sensitive API keys (BANKR_API_KEY, BANKR_LLM_KEY) in environment variables and a local configuration file (~/.bankr/config.json), where they sit in plaintext on disk (a permissions-check sketch follows this list).
  • [DATA_EXFILTRATION] (LOW): The skill communicates with non-whitelisted external domains (api.bankr.bot, llm.bankr.bot). While these are the service's own endpoints, transmitting transaction data and authentication tokens to them constitutes a data-exposure surface (an egress-allowlist sketch follows this list).
  • [PROMPT_INJECTION] (LOW): The skill exhibits a large surface for indirect prompt injection (Category 8). The agent ingests untrusted data from market research, NFT metadata, and prediction markets (market-research.md, nft-operations.md, polymarket.md), and it possesses the high-privilege capabilities needed to act on that data through 'Arbitrary Transactions' and 'Transfers' (a taint-tracking sketch follows this list).
  • [METADATA_POISONING] (MEDIUM): The LLM Gateway documentation (llm-gateway.md) lists several non-existent AI models, such as 'GPT-5.2', 'Claude-Sonnet-4.5', and 'Gemini-3-Pro'. This deceptive metadata may lead users or agents to believe the service has capabilities it does not.
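
The immediate-execution behavior called out in the COMMAND_EXECUTION finding is the one most amenable to a technical control. Below is a minimal human-in-the-loop gate in TypeScript (Node 18+ for the global fetch); the endpoint URL, request fields, and auth header are assumptions for illustration, since only the absence of a confirmation prompt is taken from sign-submit-api.md.

```ts
// Hypothetical confirmation gate in front of the Sign and Submit API.
// Endpoint path, request shape, and auth header are assumed, not documented.
import * as readline from "node:readline/promises";

interface SubmitRequest {
  chainId: number;   // assumed field
  to: string;        // assumed field: destination address
  data: string;      // assumed field: hex-encoded calldata
  valueWei: string;  // assumed field
}

async function confirmedSubmit(req: SubmitRequest, apiKey: string): Promise<unknown> {
  // Surface the transaction to a human before anything irreversible happens.
  const rl = readline.createInterface({ input: process.stdin, output: process.stdout });
  const answer = await rl.question(
    `Submit tx to ${req.to} (value ${req.valueWei} wei, chain ${req.chainId})? [y/N] `
  );
  rl.close();
  if (answer.trim().toLowerCase() !== "y") {
    throw new Error("Submission rejected by operator");
  }

  // Only after explicit approval does the request reach the submit endpoint.
  const res = await fetch("https://api.bankr.bot/v1/submit", { // assumed path
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${apiKey}` },
    body: JSON.stringify(req),
  });
  if (!res.ok) throw new Error(`Submit failed: ${res.status}`);
  return res.json();
}
```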
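
The CREDENTIALS_UNSAFE finding can be partially mitigated at read time: refuse to use a key from a config file that other local users can read. A minimal sketch, assuming a POSIX filesystem and a hypothetical apiKey field inside ~/.bankr/config.json (the path and the BANKR_API_KEY variable come from the finding itself):

```ts
// Permissions check before trusting a key from ~/.bankr/config.json.
// The "apiKey" field name is an assumption; the mode check is POSIX-only.
import { statSync, readFileSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

function loadBankrKey(): string {
  // Prefer the environment variable over a file that persists on disk.
  const fromEnv = process.env.BANKR_API_KEY;
  if (fromEnv) return fromEnv;

  const configPath = join(homedir(), ".bankr", "config.json");
  const mode = statSync(configPath).mode & 0o777;
  if ((mode & 0o077) !== 0) {
    // Group- or world-readable: treat the key as exposed rather than use it.
    throw new Error(`${configPath} has mode ${mode.toString(8)}; expected 600`);
  }
  const config = JSON.parse(readFileSync(configPath, "utf8"));
  if (typeof config.apiKey !== "string") {
    throw new Error("apiKey missing from config");
  }
  return config.apiKey;
}
```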
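
For the DATA_EXFILTRATION surface, an agent host can fail closed on outbound traffic rather than trusting every URL the skill produces. A sketch of an egress allowlist pinned to the two endpoints named in the finding; the wrapper itself is illustrative, not part of the skill:

```ts
// Hypothetical fetch wrapper that blocks requests to non-allowlisted hosts,
// so transaction data and tokens can only reach the service's own endpoints.
const ALLOWED_HOSTS = new Set(["api.bankr.bot", "llm.bankr.bot"]);

async function guardedFetch(url: string, init?: RequestInit): Promise<Response> {
  const host = new URL(url).hostname;
  if (!ALLOWED_HOSTS.has(host)) {
    // Fail closed: any unexpected destination is a potential exfiltration path.
    throw new Error(`Blocked outbound request to non-allowlisted host: ${host}`);
  }
  return fetch(url, init);
}
```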
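
The PROMPT_INJECTION finding pairs untrusted input with privileged output, so any mitigation has to track provenance. The taint-tracking sketch below is entirely hypothetical (none of these types or action names exist in the skill); the point is that an action planned from tainted input carries a mandatory approval flag, routing it through a gate like confirmedSubmit above:

```ts
// Hypothetical taint tracking: content from market research, NFT metadata,
// or prediction markets is tagged by source, and privileged actions derived
// from it cannot be executed without operator approval.
interface Tainted<T> {
  value: T;
  source: "market-research" | "nft-metadata" | "polymarket";
}

interface PrivilegedAction {
  kind: "transfer" | "arbitrary-tx";
  payload: unknown;
  requiresApproval: boolean;
}

function planAction(input: Tainted<string>): PrivilegedAction {
  // Anything derived from tainted input is flagged; the executor must
  // refuse to run a flagged action without explicit operator sign-off.
  return {
    kind: "transfer",
    payload: { note: `derived from ${input.source}` },
    requiresApproval: true,
  };
}
```
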
Audit Metadata
Risk Level: MEDIUM
Analyzed: Feb 17, 2026, 08:01 PM