skills/eyadsibai/ltk/ml-engineering/Gen Agent Trust Hub

ml-engineering

Warn

Audited by Gen Agent Trust Hub on Feb 16, 2026

Risk Level: MEDIUMCOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
  • [Dynamic Execution] (MEDIUM): The example code utilizes torch.load("model.pth"). PyTorch's default serialization uses the pickle module, which is known to be insecure and can execute arbitrary code during the unpickling process. This poses a risk if an attacker can provide a malicious model file. Evidence: SKILL.md Python snippet 'model = torch.load("model.pth")'.
  • [Indirect Prompt Injection] (MEDIUM): The skill describes architectures for model serving (FastAPI) and RAG (LangChain) that ingest external untrusted data without providing boundary markers or sanitization logic. This creates a surface where malicious instructions in input data or retrieved context could influence agent logic. 1. Ingestion points: FastAPI /predict endpoint (data: dict) and LangChain vectorstore context. 2. Boundary markers: Absent. 3. Capability inventory: Model inference and LLM-based retrieval/QA. 4. Sanitization: Absent.
Audit Metadata
Risk Level
MEDIUM
Analyzed
Feb 16, 2026, 12:22 AM