latchbio-integration

Fail

Audited by Gen Agent Trust Hub on Feb 16, 2026

Risk Level: HIGH
Tags: REMOTE_CODE_EXECUTION, DATA_EXFILTRATION, PROMPT_INJECTION, EXTERNAL_DOWNLOADS
Full Analysis
  • [Indirect Prompt Injection] (HIGH): The skill defines patterns for processing untrusted external data that can influence agent behavior or workflow execution.
  • Ingestion points: LatchFile and LatchDir in references/data-management.md ingest external cloud storage files. Record.list and Record.get ingest metadata from the Latch Registry.
  • Boundary markers: Absent. The code examples show direct interpolation of data (e.g., sample.values['fastq_file']) into processing logic without delimiters.
  • Capability inventory: The skill enables custom_task execution with up to 96 CPUs and 768GB RAM, and provides a get_secret('api_key') function for credential access.
  • Sanitization: No evidence of input validation, escaping, or schema enforcement for data ingested from files or registry records.
  • [Remote Code Execution] (HIGH): The latch register command (documented in references/workflow-creation.md) builds Docker containers and serializes Python code for execution on remote Latch infrastructure, which amounts to arbitrary code execution on cloud resources.
  • [Data Exfiltration] (MEDIUM): The latch.functions.get_secret utility in references/data-management.md allows workflows to retrieve sensitive credentials. Combined with the LatchFile upload capability, this creates a path for secret exfiltration to attacker-controlled storage if the workflow logic is compromised via indirect injection.
  • [External Downloads] (LOW): Documentation in references/workflow-creation.md instructs the user to install the latch package from PyPI (python3 -m pip install latch). While standard, the source latchbio is not in the trusted-scope-list.
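The missing sanitization and boundary markers flagged above could be addressed with a small validation layer before registry values reach any prompt or command string. A minimal sketch, assuming a workflow that consumes FASTQ file names from Record values; the FASTQ_NAME pattern, validate_fastq_name, and wrap_untrusted helpers are hypothetical and not part of the latch SDK:

```python
import re

# Hypothetical allowlist pattern for FASTQ file names pulled from
# registry records (e.g., sample.values['fastq_file']).
FASTQ_NAME = re.compile(r"[A-Za-z0-9._-]+\.fastq(\.gz)?")

def validate_fastq_name(name: str) -> str:
    """Reject registry values that do not look like plain FASTQ file names."""
    if not FASTQ_NAME.fullmatch(name):
        raise ValueError(f"untrusted registry value rejected: {name!r}")
    return name

def wrap_untrusted(text: str) -> str:
    """Delimit untrusted data with explicit boundary markers before it is
    interpolated into any prompt or processing logic."""
    return f"<untrusted-data>\n{text}\n</untrusted-data>"
```

Schema enforcement of this kind does not remove the injection risk, but it narrows the values an attacker can smuggle through registry metadata.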
Recommendations
  • Automated analysis detected serious security threats; review the data-ingestion, remote-execution, and secret-handling paths identified above before enabling this skill.
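One concrete mitigation for the exfiltration path (get_secret combined with LatchFile uploads) is to gate upload destinations behind an allowlist. A minimal sketch; ALLOWED_UPLOAD_PREFIXES and check_upload_destination are hypothetical helpers, not latch APIs:

```python
# Hypothetical allowlist: only destinations under these prefixes may
# receive uploads from workflows that also call get_secret.
ALLOWED_UPLOAD_PREFIXES = ("latch:///trusted-bucket/",)

def check_upload_destination(dest: str) -> str:
    """Block uploads to destinations outside the allowlist, closing the
    secret-exfiltration path to attacker-controlled storage."""
    if not dest.startswith(ALLOWED_UPLOAD_PREFIXES):
        raise PermissionError(f"upload destination not allowed: {dest}")
    return dest
```

Enforcing this check at the point where workflow logic constructs a LatchFile destination prevents injected instructions from redirecting secrets to arbitrary storage.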
Audit Metadata
Risk Level
HIGH
Analyzed
Feb 16, 2026, 08:50 AM