moai-domain-ml

Pass

Audited by Gen Agent Trust Hub on Mar 1, 2026

Risk Level: SAFE
Finding categories: COMMAND_EXECUTION, EXTERNAL_DOWNLOADS, REMOTE_CODE_EXECUTION, PROMPT_INJECTION
Full Analysis
  • [REMOTE_CODE_EXECUTION]: The skill's implementation examples use joblib.load() and mlflow.pyfunc.load_model(). Both rely on pickle-based deserialization, which can execute arbitrary code when a malicious model file is loaded. While standard in ML workflows, this remains a significant attack vector unless the provenance of model files is strictly controlled.
  • [PROMPT_INJECTION]: The skill is vulnerable to indirect prompt injection because it ingests external data without sufficient sanitization.
    1. Ingestion points: DataProcessor.load_data() in examples.md and PredictionRequest in SKILL.md.
    2. Boundary markers: absent from the provided prompt templates.
    3. Capability inventory: the skill uses the Bash, Write, Edit, WebFetch, and WebSearch tools.
    4. Sanitization: no input validation for feature values or CSV content.
  • [COMMAND_EXECUTION]: The skill explicitly allows and provides examples for using the Bash tool to set up environments and run training jobs, which could be exploited if the agent is compromised via injection.
  • [EXTERNAL_DOWNLOADS]: The skill instructions involve downloading numerous Python packages (e.g., torch, scikit-learn, mlflow) via pip, which relies on the security of external package registries.
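The REMOTE_CODE_EXECUTION finding above can be illustrated with a minimal, benign sketch (an assumption for illustration, not code from the audited skill): pickle-based loaders such as joblib.load() invoke __reduce__ during deserialization, so the pickle stream itself chooses a callable to execute.

```python
import pickle

# Benign demonstration of why pickle-based model loaders are an RCE vector:
# unpickling calls __reduce__, which may name ANY importable callable.
# (MaliciousModel is a hypothetical example class, not part of the skill.)
class MaliciousModel:
    def __reduce__(self):
        # A real attacker would return something like (os.system, ("...",));
        # we use a harmless builtin to show the stream controls execution.
        return (sorted, (["c", "a", "b"],))

blob = pickle.dumps(MaliciousModel())

# Simulates loading an untrusted "model" file: the attacker-chosen callable
# runs during deserialization, before any type checking can happen.
result = pickle.loads(blob)
print(result)  # → ['a', 'b', 'c'] — proof the embedded callable executed
```

This is why the audit treats model provenance as the only effective control: once joblib.load() or pickle.loads() touches an untrusted file, no post-hoc validation can help.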
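For the missing sanitization flagged under PROMPT_INJECTION, a validation layer for incoming feature dictionaries might look like the following sketch. All names here (validate_features, EXPECTED_FEATURES, and the example feature schema) are illustrative assumptions, not identifiers from SKILL.md or examples.md.

```python
import math

# Hypothetical feature schema: name -> (min, max) allowed range.
EXPECTED_FEATURES = {"age": (0.0, 120.0), "income": (0.0, 1e7)}

def validate_features(features: dict) -> dict:
    """Reject unknown keys, non-numeric values, NaN/inf, and out-of-range inputs."""
    unknown = set(features) - set(EXPECTED_FEATURES)
    if unknown:
        raise ValueError(f"unexpected feature(s): {sorted(unknown)}")
    clean = {}
    for name, (lo, hi) in EXPECTED_FEATURES.items():
        value = features.get(name)
        # bool is a subclass of int, so exclude it explicitly.
        if not isinstance(value, (int, float)) or isinstance(value, bool):
            raise ValueError(f"{name} must be numeric")
        if math.isnan(value) or math.isinf(value):
            raise ValueError(f"{name} must be finite")
        if not lo <= value <= hi:
            raise ValueError(f"{name}={value} outside [{lo}, {hi}]")
        clean[name] = float(value)
    return clean

print(validate_features({"age": 42, "income": 55000}))
```

Applying a whitelist-style check like this at each ingestion point would address the "Sanitization" gap, though it does not by itself mitigate injection via free-text CSV content.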
Audit Metadata
Risk Level: SAFE
Analyzed: Mar 1, 2026, 01:07 AM