llm-inference
Pass
Audited by Gen Agent Trust Hub on Feb 17, 2026
Risk Level: SAFEPROMPT_INJECTION
Full Analysis
- [Indirect Prompt Injection] (LOW): The skill processes arbitrary string data via LLM inference which creates a surface for indirect prompt injection. Evidence: (1) Ingestion points: Any string processed by the listed models. (2) Boundary markers: Absent from documentation. (3) Capability inventory: Full LLM inference capabilities. (4) Sanitization: No sanitization or escaping protocols mentioned.
- [External Downloads] (SAFE): Mentions internal project files for functional logic but contains no instructions for downloading or executing code from external, untrusted sources.
Audit Metadata