faion-ml-ops

Fail

Audited by Gen Agent Trust Hub on Feb 16, 2026

Risk Level: HIGHREMOTE_CODE_EXECUTIONCOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
  • Unverifiable Dependencies & Remote Code Execution (HIGH): The documentation explicitly instructs users to enable remote code execution when loading pre-trained models.
  • Evidence: lora-qlora/README.md contains a configuration for AutoModelForCausalLM.from_pretrained that sets trust_remote_code=True. This allows any Python code included in the model repository (e.g., from HuggingFace) to execute on the local system during initialization.
  • Indirect Prompt Injection (HIGH): The skill possesses a high-risk vulnerability surface where untrusted data (ML datasets) is processed by tools with system-level execution capabilities.
  • Ingestion points: fine-tuning-openai-basics/README.md uses seed_examples for data generation; lora-qlora/README.md uses dataset for SFT training.
  • Boundary markers: Absent. Untrusted examples are interpolated directly into GPT-4 prompts in generate_training_data.
  • Capability inventory: The skill has access to Bash, Write, Edit, and Task tools (SKILL.md), allowing a successful injection to perform arbitrary system commands or file modifications.
  • Sanitization: Absent. No validation or escaping is implemented for training data content.
  • Data Exposure & Exfiltration (LOW): The observability setup routes sensitive LLM traffic through third-party proxy services.
  • Evidence: llm-observability-stack-2026/README.md configures the OpenAI client with base_url="https://oai.hconeai.com/v1" (Helicone), sending all application data to an external non-whitelisted domain.
Recommendations
  • AI detected serious security threats
Audit Metadata
Risk Level
HIGH
Analyzed
Feb 16, 2026, 04:08 AM