llm-security
Pass
Audited by Gen Agent Trust Hub on Feb 26, 2026
Risk Level: SAFE
Full Analysis
- [PROMPT_INJECTION]: The skill documentation includes extensive lists of prompt injection and jailbreak payloads (e.g., 'Ignore previous instructions', 'DAN mode'). These are clearly categorized as test vectors for security assessments of other AI systems and do not target the agent's internal safety logic.
- [COMMAND_EXECUTION]: The file lists various system commands (e.g., 'ls -la', 'cat /etc/passwd') as examples for testing 'Excessive Agency' in AI agents. These are static text examples for researcher reference rather than executable instructions within the skill itself.
- [EXTERNAL_DOWNLOADS]: The provided Python automation scripts use the 'requests' library to interact with external API endpoints. These endpoints are intended to be configured by the user for testing their own applications, and no hardcoded or suspicious destination URLs were found.
Audit Metadata