llm-test
Pass
Audited by Gen Agent Trust Hub on Apr 5, 2026
Risk Level: SAFE
Full Analysis
- [SAFE]: The skill is a documentation guide for developers writing tests for LLM features. It specifies best practices for model selection in test fixtures and points to a local markdown file for extended guidelines. No security risks, such as prompt injection, data exfiltration, or remote code execution, were detected.