ai-ml-development
Pass
Audited by Gen Agent Trust Hub on Feb 19, 2026
Risk Level: SAFE
Findings: PROMPT_INJECTION, REMOTE_CODE_EXECUTION
Full Analysis
- Indirect Prompt Injection (LOW): The skill contains functions that interpolate untrusted user data into LLM prompts without using boundary markers or sanitization logic, creating a surface for indirect prompt injection attacks.
- Ingestion points: The `text` parameter in the `extract_entities` function and the LangChain summarization template (`Summarize: {text}`).
- Boundary markers: Absent; user input is directly concatenated or interpolated.
- Capability inventory: The agent has network access to external LLM providers (OpenAI, Anthropic).
- Sanitization: No escaping or validation is performed on the input text before inclusion in the prompt.
- Dynamic Execution (LOW): In the FastAPI model serving section, the code uses `torch.load("model.pt")`. In PyTorch, this function uses Python's `pickle` module by default, which can execute arbitrary code if the model file is replaced with a malicious artifact. While this is standard practice in ML development, it remains a security risk if the storage environment is compromised. (Severity downgraded from MEDIUM as it is intrinsic to the primary ML development purpose.)
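The risk behind this finding can be demonstrated with the standard library alone: unpickling calls whatever callable a crafted object's `__reduce__` returns, before any model code runs. The class and payload below are hypothetical stand-ins (a real attack would invoke something like `os.system`); the mitigation in PyTorch is to pass `weights_only=True` to `torch.load`, which restricts deserialization to tensor data.

```python
import pickle

RAN = {"flag": False}

def payload(msg):
    # Stand-in for a real attacker callable such as os.system(...).
    RAN["flag"] = True
    return msg

class MaliciousArtifact:
    """Hypothetical object a compromised model file could contain."""

    def __reduce__(self):
        # pickle stores (callable, args) and calls it on load.
        return (payload, ("pwned",))

blob = pickle.dumps(MaliciousArtifact())
obj = pickle.loads(blob)  # payload() executes here, during deserialization
```

Because `pickle.loads` returns the callable's result, `obj` is the string `"pwned"` and the side effect has already fired by the time the caller inspects the "model". This is why swapping `model.pt` for a malicious artifact is equivalent to remote code execution on the serving host.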
Audit Metadata