openbio
Audit Result: Fail
Audited by Gen Agent Trust Hub on Feb 16, 2026
Risk Level: HIGH
Findings: EXTERNAL_DOWNLOADS, REMOTE_CODE_EXECUTION, COMMAND_EXECUTION, DATA_EXFILTRATION, PROMPT_INJECTION
Full Analysis
- REMOTE_CODE_EXECUTION (HIGH): The skill instructs the agent to install and update itself using `bunx skills add` from the GitHub repository `openbio-ai/skills`. Because `bunx` fetches and executes packages, and `openbio-ai` is not a Trusted External Source, this allows arbitrary code execution from an untrusted third party.
- EXTERNAL_DOWNLOADS (HIGH): The installation instructions reference an untrusted external repository (https://github.com/openbio-ai/skills). Per the analysis rules, downloading and executing code from non-trusted sources is a high-severity finding.
- PROMPT_INJECTION (HIGH): The skill presents a large attack surface for Indirect Prompt Injection (Category 8). It is designed to fetch and process external, untrusted content from PubMed, bioRxiv, and genomic databases. If a processed record contains malicious instructions, the agent, which is already configured with command-execution capabilities, could be coerced into performing unauthorized actions. Mandatory Evidence: ingestion points found in `literature.md`, `genomics.md`, and `blast.md`; the capability inventory includes `bunx` and `curl` (SKILL.md).
- COMMAND_EXECUTION (MEDIUM): The documentation requires the agent to execute multiple shell commands (`curl`, `bunx`, `export`) to interact with the API and manage local state. This provides the environment needed for technical exploitation if an injection occurs.
- DATA_EXFILTRATION (MEDIUM): The skill makes network requests to `api.openbio.tech`. While this is the intended API, the domain is not whitelisted. Several tools (e.g., `submit_boltz_prediction` in `boltz.md`) accept local `input_file_path` parameters. If an agent is manipulated into reading sensitive files (such as SSH keys) and passing their contents to the API via these tools, exfiltration occurs.
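The exfiltration finding hinges on the destination domain not being whitelisted. A minimal sketch of the kind of egress allowlist check a host environment could apply is below; the `TRUSTED_HOSTS` entries and the `egress_allowed` function are illustrative assumptions, not part of the audited skill, and `api.openbio.tech` is deliberately absent from the allowlist to mirror the finding.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of approved data sources. api.openbio.tech is
# intentionally not listed, matching the audit finding that the skill's
# API domain is not whitelisted.
TRUSTED_HOSTS = {
    "pubmed.ncbi.nlm.nih.gov",
    "www.ebi.ac.uk",
}

def egress_allowed(url: str) -> bool:
    """Return True only if the request target's hostname is on the allowlist."""
    return urlparse(url).hostname in TRUSTED_HOSTS

# A request like the one the skill makes would be blocked under this policy:
print(egress_allowed("https://api.openbio.tech/v1/predict"))  # False
```

Such a check would not stop the prompt-injection vector itself, but it would prevent the final hop of the described exfiltration path (file contents leaving via an unapproved domain).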
Recommendations
- AI detected serious security threats
Audit Metadata