biomni
Fail
Audited by Gen Agent Trust Hub on Feb 17, 2026
Risk Level: HIGH
Tags: EXTERNAL_DOWNLOADS, REMOTE_CODE_EXECUTION, COMMAND_EXECUTION, DATA_EXFILTRATION
Full Analysis
- [EXTERNAL_DOWNLOADS] (HIGH): The installation process requires cloning a repository from GitHub and executing a local shell script (`bash setup.sh`). Executing downloaded scripts without prior review is a high-risk operation that can lead to system compromise.
- [REMOTE_CODE_EXECUTION] (HIGH): The agent's primary function is to generate and execute code dynamically via the `agent.go()` method. The documentation explicitly states that generated code runs with 'full system privileges', creating a massive attack surface if the LLM is manipulated.
- [COMMAND_EXECUTION] (HIGH): Setup instructions involve shell commands for environment configuration and package management (`bash setup.sh`, `pip install`), which can be used to execute arbitrary code during the installation phase.
- [DATA_EXFILTRATION] (MEDIUM): The skill is designed to handle sensitive biomedical, clinical, and genomic data. Because the agent also requires network access for LLM APIs and database downloads, there is a potential path for data exposure or exfiltration if the agent is prompted to send data externally.
- [PROMPT_INJECTION] (LOW): As an autonomous agent that processes external files (GWAS summary stats, RNA-seq data), it is vulnerable to Indirect Prompt Injection (Category 8). An attacker could embed instructions within biological datasets to subvert the agent's logic during analysis.
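The indirect prompt injection risk above can be partially screened for before a file ever reaches the agent. The sketch below is a hypothetical pre-screening helper (not part of biomni), assuming text-based input files such as GWAS summary stats; the keyword patterns are illustrative only, and a determined attacker can evade simple filters like these.

```python
import re

# Illustrative heuristics for instruction-like text embedded in data files.
# This list is an assumption for the sketch, not an exhaustive defense.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"you are now",
    r"system prompt",
    r"curl\s+http",
]

def flag_suspicious_lines(path: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs matching an injection heuristic."""
    hits = []
    with open(path, encoding="utf-8", errors="replace") as fh:
        for lineno, line in enumerate(fh, start=1):
            if any(re.search(p, line, re.IGNORECASE) for p in INJECTION_PATTERNS):
                hits.append((lineno, line.rstrip()))
    return hits
```

Flagged files would still need human review; the point of the check is to surface obviously planted instructions before an autonomous analysis run, not to certify a file as safe.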
Recommendations
- AI analysis detected serious security threats; manual review of the setup script and sandboxing of generated code are strongly advised before installation.
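Since the audit's central finding is that generated code runs with full system privileges, one partial mitigation is to run that code in a child process with resource caps. The sketch below assumes a POSIX host; `run_untrusted` is a hypothetical wrapper, not a biomni API, and resource limits alone do not block network or filesystem access.

```python
import os
import resource        # POSIX-only resource limits
import subprocess
import sys
import tempfile

def run_untrusted(code: str, timeout: int = 30) -> subprocess.CompletedProcess:
    """Execute LLM-generated Python in a child process with CPU and memory caps.

    A partial mitigation only: limits curb runaway code but do not block
    network or filesystem access; real isolation needs a container or VM.
    """
    def apply_limits():
        resource.setrlimit(resource.RLIMIT_CPU, (timeout, timeout))  # CPU seconds
        resource.setrlimit(resource.RLIMIT_AS, (2 << 30, 2 << 30))   # 2 GiB address space

    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        script = f.name
    try:
        return subprocess.run(
            [sys.executable, script],
            preexec_fn=apply_limits,  # runs in the child before exec, POSIX only
            capture_output=True,
            text=True,
            timeout=timeout,
        )
    finally:
        os.unlink(script)
```

For data handling of the sensitivity described in this audit, a container or VM with no outbound network access is the stronger choice; the wrapper above only bounds resource abuse.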