latchbio-integration
Fail
Audited by Gen Agent Trust Hub on Feb 15, 2026
Risk Level: HIGHPROMPT_INJECTIONREMOTE_CODE_EXECUTIONCOMMAND_EXECUTION
Full Analysis
- [Indirect Prompt Injection] (HIGH): The skill exposes a large attack surface where untrusted data can influence the generation of executable workflows. \n
- Ingestion points: Data enters the context via
LatchFile,LatchDir, and Registry components (Project.list,Table.get,Record.list) as described inreferences/data-management.md. \n - Boundary markers: Absent. There are no delimiters or instructions provided to help the agent distinguish between bioinformatics data and malicious instructions embedded in that data. \n
- Capability inventory: The agent can execute
latch registerandlatch execute, which build, deploy, and run containers and Python scripts. \n - Sanitization: Absent. No logic is provided to validate or sanitize external metadata before it is used in workflow definitions. \n- [Unverifiable Dependencies & Remote Code Execution] (HIGH): The skill relies on the installation of the
latchSDK from PyPI. While a legitimate tool for the Latch platform, it is not within the pre-defined trusted organizations list. Furthermore, thelatch registercommand performs runtime containerization and serialization of local code, which is a high-risk capability if the code being registered is generated by an AI influenced by untrusted inputs. \n- [Command Execution] (MEDIUM): The skill instructs the agent to perform sensitive CLI operations, includinglatch login(authentication),latch init(file system modification), andlatch register(network and build operations), which provide the agent with significant control over the local environment and the user's Latch account.
Recommendations
- AI detected serious security threats
Audit Metadata