bankr
Verdict: Warn
Audited by Gen Agent Trust Hub on Feb 17, 2026
Risk Level: MEDIUM
Flags: CREDENTIALS_UNSAFE, EXTERNAL_DOWNLOADS, COMMAND_EXECUTION, DATA_EXFILTRATION
Full Analysis
- EXTERNAL_DOWNLOADS (MEDIUM): The skill instructs users to install the '@bankr/cli' package globally using 'npm' or 'bun'. Since this package and organization are not on the trusted list, it is considered an unverifiable dependency.
- CREDENTIALS_UNSAFE (MEDIUM): The documentation describes an 'LLM Gateway' (llm.bankr.bot) that proxies requests to major AI providers. It encourages users to configure their own API keys within this gateway, which puts the third-party service in a position to intercept sensitive LLM credentials.
- COMMAND_EXECUTION (MEDIUM): The 'Arbitrary Transaction' and 'Sign and Submit API' features allow an AI agent to generate and execute raw EVM calldata. This presents a high risk of fund loss if an attacker uses prompt injection to trick the agent into signing a malicious transaction payload.
- DATA_EXFILTRATION (MEDIUM): Users are instructed to store their API keys in cleartext in a local configuration file at '~/.bankr/config.json'. This predictable path makes the credentials a target for exfiltration by other malicious scripts or skills.
- INDIRECT_PROMPT_INJECTION (LOW): The skill defines several untrusted data ingestion points, including NFT marketplaces, prediction markets, and social sentiment analysis. Combined with the agent's high-privilege financial capabilities, these create a significant attack surface for indirect prompt injection, although no specific malicious instructions were found in the static reference files.
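The DATA_EXFILTRATION finding hinges on the key file at '~/.bankr/config.json' being readable by other local processes. A minimal, standard-library sketch of the kind of permissions check a user or agent host could run before trusting that file (the path comes from the finding above; the function name is illustrative, not part of any Bankr tooling):

```python
import os
import stat

# Path named in the audit finding: a predictable, cleartext key file.
CONFIG_PATH = os.path.expanduser("~/.bankr/config.json")

def config_is_private(path: str = CONFIG_PATH) -> bool:
    """Return True only if the file exists and is readable by its owner alone."""
    try:
        mode = os.stat(path).st_mode
    except FileNotFoundError:
        return False
    # Any group/other permission bit means other local users' scripts can read the keys.
    return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0
```

Note that even a 0600 file remains readable to every process running as the same user, so this check narrows, but does not eliminate, the exfiltration surface described above.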
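The COMMAND_EXECUTION risk is usually mitigated by refusing to sign agent-generated calldata unless the target contract is explicitly allowlisted. A hedged sketch of that pattern, assuming a transaction is represented as a plain dict with a 'to' field; the names and addresses here are hypothetical and not drawn from the Bankr API:

```python
# Illustrative allowlist guard: an agent host would call this before
# handing any AI-generated transaction to a signing key.
ALLOWED_TARGETS = {
    # Hypothetical example entry; a real deployment would list audited contracts.
    "0x0000000000000000000000000000000000000001",
}

def guard_transaction(tx: dict) -> dict:
    """Raise ValueError unless the transaction targets an allowlisted contract."""
    to = str(tx.get("to", "")).lower()
    if to not in ALLOWED_TARGETS:
        raise ValueError(f"refusing to sign: target {to!r} is not allowlisted")
    return tx
```

A guard like this blunts the prompt-injection scenario described above: injected instructions can still shape the calldata, but cannot redirect funds to an attacker-controlled address that was never allowlisted.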
Audit Metadata