valohai-migrate-parameters

Pass

Audited by Gen Agent Trust Hub on Mar 10, 2026

Risk Level: SAFE
Full Analysis
  • [SAFE]: The skill provides instructions for structural refactoring of machine learning code to improve configurability. It encourages the use of standard Python libraries such as argparse and incorporates security best practices like yaml.safe_load() for parsing configuration files.
  • [EXTERNAL_DOWNLOADS]: Examples in the documentation include standard package-management commands (pip install) inside valohai.yaml configuration files. These are routine operations on the Valohai platform and in the machine learning development lifecycle, targeting established package registries.
  • [PROMPT_INJECTION]: The skill involves processing user-provided ML source code to identify hardcoded values.
    1. Ingestion points: user-provided Python training scripts and ML code.
    2. Boundary markers: absent.
    3. Capability inventory: refactoring local Python scripts and generating/updating valohai.yaml configuration files.
    4. Sanitization: instructions promote standard argparse patterns and yaml.safe_load(). The risk of indirect prompt injection is minimal, as the skill's logic focuses on technical migration and structural refactoring rather than executing content from the processed code as instructions.
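The refactoring pattern the audit credits as a mitigation can be sketched as follows. This is a minimal illustration, not the skill's actual output; the parameter names (learning_rate, epochs) and the load_config helper are hypothetical. The key point is the one the analysis names: yaml.safe_load() parses plain YAML data only and will not construct arbitrary Python objects, and argparse gives formerly hardcoded values explicit, typed entry points.

```python
import argparse

import yaml  # PyYAML; safe_load refuses arbitrary-object YAML tags


def build_parser():
    # Hypothetical hyperparameters lifted out of hardcoded values;
    # names are illustrative, not taken from the audited skill.
    parser = argparse.ArgumentParser(description="Training entry point")
    parser.add_argument("--learning-rate", type=float, default=0.001)
    parser.add_argument("--epochs", type=int, default=10)
    return parser


def load_config(text):
    # safe_load parses scalars, lists, and mappings only;
    # it never executes tagged Python constructors (unlike yaml.load).
    return yaml.safe_load(text)


if __name__ == "__main__":
    args = build_parser().parse_args()
    config = load_config("learning_rate: 0.01\nepochs: 5\n")
    print(args, config)
```

In a Valohai step, these flags would then be declared as parameters in valohai.yaml so the platform can vary them per execution.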
Audit Metadata
Risk Level: SAFE
Analyzed: Mar 10, 2026, 09:49 AM