LLM
Pass
Audited by Gen Agent Trust Hub on Mar 16, 2026
Risk Level: SAFE
Full Analysis
- [SAFE]: The skill provides standard documentation and integration code for an LLM SDK. It includes clear security guidance, such as restricting SDK usage to backend environments and protecting API keys.
- [INDIRECT_PROMPT_INJECTION]: The skill serves as a wrapper for LLM chat completions, which is a known surface for indirect prompt injection.
- Ingestion points: Untrusted user input is accepted through the message parameters in the SDK implementation and the Express.js API endpoint described in SKILL.md.
- Boundary markers: Examples demonstrate simple message passing without explicit delimiters or safety instructions, which is common for basic SDK documentation.
- Capability inventory: The skill provides text generation capabilities and documents CLI functionality for saving output to files.
- Sanitization: No prompt sanitization or filtering is implemented within the boilerplate, as the skill is intended to provide a raw interface for the underlying LLM.
Audit Metadata