fine-tuning-serving-openpi
Warning
Audited by Gen Agent Trust Hub on Mar 19, 2026
Risk Level: MEDIUM
Tags: EXTERNAL_DOWNLOADS, COMMAND_EXECUTION, PROMPT_INJECTION
Full Analysis
- [EXTERNAL_DOWNLOADS]: Clones the OpenPI source code and its submodules from the official Physical Intelligence GitHub repository (https://github.com/Physical-Intelligence/openpi.git).
- [EXTERNAL_DOWNLOADS]: Retrieves pre-trained model checkpoints and training assets from a Google Cloud Storage bucket (gs://openpi-assets/).
- [COMMAND_EXECUTION]: Performs manual monkey-patching of the 'transformers' library by copying local files directly into the virtual environment's library directory (e.g., .venv/lib/python3.11/site-packages/transformers/). This modifies the behavior of an installed dependency at the file-system level.
- [PROMPT_INJECTION]: The skill processes untrusted natural language input via the 'prompt' key in robot observations, which is an attack surface for indirect prompt injection.
- Ingestion points: Found in SKILL.md and references/remote-client-pattern.md where user-provided strings are passed to the 'infer' method.
- Boundary markers: Absent. No delimiters or instruction boundaries separate the untrusted prompt from the model's control instructions.
- Capability inventory: The skill executes training scripts (train.py) and operates a WebSocket-based policy server (serve_policy.py).
- Sanitization: No evidence of validation or sanitization for the input prompts before they are processed by the model.
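The file-system-level patching of 'transformers' flagged above amounts to copying local source files over the installed package. A minimal sketch of that pattern is below; the helper name, directory layout, and parameters are illustrative assumptions, not the skill's actual code.

```python
import shutil
import sysconfig
from pathlib import Path

def patch_transformers(local_patch_dir, site_packages=None):
    """Sketch of a file-system-level monkey-patch: local .py files
    are copied over the installed 'transformers' package, silently
    changing the dependency's behavior for the whole environment.
    Helper name and layout are assumptions, not the skill's code.
    """
    if site_packages is None:
        # Resolve the active environment's site-packages directory
        # (e.g. .venv/lib/python3.11/site-packages).
        site_packages = Path(sysconfig.get_paths()["purelib"])
    target = Path(site_packages) / "transformers"
    for src in Path(local_patch_dir).glob("*.py"):
        # Overwrites the dependency's installed files in place.
        shutil.copy(src, target / src.name)
```

Because the copy happens outside any package manager, the change is invisible to pip and survives until the package is reinstalled, which is why the audit treats it as command execution against the environment.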
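The prompt-injection ingestion point described above follows the remote-client pattern: the untrusted 'prompt' string rides inside the observation dict that the client hands to 'infer'. A sketch of how that flow typically looks is below; the `WebsocketClientPolicy` class, import path, and observation keys are assumptions inferred from the skill's description of serve_policy.py, not verified API.

```python
def build_observation(image, state, prompt):
    """Bundle robot sensor data with a natural-language instruction.

    'prompt' is the untrusted ingestion point flagged in the audit:
    it is forwarded verbatim, with no delimiters or sanitization.
    """
    return {
        "observation/image": image,
        "observation/state": state,
        "prompt": prompt,  # attacker-controllable free text
    }

def query_policy_server(observation, host="localhost", port=8000):
    """Send one observation to the WebSocket policy server.

    The import is deferred so this sketch stays importable without
    the client library installed; names here are assumptions.
    """
    from openpi_client import websocket_client_policy
    client = websocket_client_policy.WebsocketClientPolicy(host=host, port=port)
    return client.infer(observation)  # prompt reaches the model here
```

Anything that can write into the prompt field, such as a task file or an upstream planner, can therefore steer the policy, which is why the audit classifies this as an indirect injection surface.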
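The missing sanitization noted in the last finding could be mitigated with a small guard applied before the prompt reaches the model. The sketch below is a suggestion under stated assumptions (function name, length cap, and character policy are all illustrative), not part of the audited skill.

```python
import re

MAX_PROMPT_LEN = 256  # assumed cap, not taken from the skill

def sanitize_prompt(prompt):
    """Basic guard for an untrusted natural-language prompt.

    Strips control characters, collapses whitespace, and enforces a
    length cap so oversized or malformed input is rejected early.
    This reduces, but does not eliminate, injection risk.
    """
    if not isinstance(prompt, str):
        raise TypeError("prompt must be a string")
    # Drop ASCII control characters (NUL, escapes, etc.).
    cleaned = re.sub(r"[\x00-\x1f\x7f]", " ", prompt)
    # Collapse runs of whitespace into single spaces.
    cleaned = " ".join(cleaned.split())
    if len(cleaned) > MAX_PROMPT_LEN:
        raise ValueError("prompt exceeds maximum allowed length")
    return cleaned
```

A guard like this addresses malformed input only; semantic injection ("ignore previous instructions...") still requires boundary markers or policy-side checks.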
Audit Metadata