faion-ml-ops
Fail
Audited by Gen Agent Trust Hub on Feb 16, 2026
Risk Level: HIGHREMOTE_CODE_EXECUTIONCOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
- Unverifiable Dependencies & Remote Code Execution (HIGH): The documentation explicitly instructs users to enable remote code execution when loading pre-trained models.
- Evidence:
lora-qlora/README.mdcontains a configuration forAutoModelForCausalLM.from_pretrainedthat setstrust_remote_code=True. This allows any Python code included in the model repository (e.g., from HuggingFace) to execute on the local system during initialization. - Indirect Prompt Injection (HIGH): The skill possesses a high-risk vulnerability surface where untrusted data (ML datasets) is processed by tools with system-level execution capabilities.
- Ingestion points:
fine-tuning-openai-basics/README.mdusesseed_examplesfor data generation;lora-qlora/README.mdusesdatasetfor SFT training. - Boundary markers: Absent. Untrusted examples are interpolated directly into GPT-4 prompts in
generate_training_data. - Capability inventory: The skill has access to
Bash,Write,Edit, andTasktools (SKILL.md), allowing a successful injection to perform arbitrary system commands or file modifications. - Sanitization: Absent. No validation or escaping is implemented for training data content.
- Data Exposure & Exfiltration (LOW): The observability setup routes sensitive LLM traffic through third-party proxy services.
- Evidence:
llm-observability-stack-2026/README.mdconfigures the OpenAI client withbase_url="https://oai.hconeai.com/v1"(Helicone), sending all application data to an external non-whitelisted domain.
Recommendations
- AI detected serious security threats
Audit Metadata