securing-ai-development
Pass
Audited by Gen Agent Trust Hub on Mar 6, 2026
Risk Level: SAFE
Full Analysis
- [SAFE]: The skill consists entirely of Markdown files (SKILL.md, INSTRUCTIONS.md, and reference documents) providing organizational guidelines and security strategies. No executable Python, Node.js, or shell scripts are present.
- [SAFE]: No prompt injection or behavior override attempts were found. The instructions are focused on establishing security frameworks and do not attempt to bypass LLM safety filters.
- [SAFE]: No data exposure or exfiltration risks were detected. The documents mention sensitive file paths and credentials only as examples of what security policies should prohibit or monitor.
- [SAFE]: No obfuscation, zero-width characters, or hidden payloads were found in any of the analyzed files.
- [SAFE]: The skill discusses security threats such as prompt injection, model poisoning, and data leakage in an educational context, providing defenses rather than implementing attacks.
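The obfuscation check above can be illustrated with a short sketch. This is not the auditor's actual tooling; the function names and the set of code points scanned are assumptions chosen for the example:

```python
import unicodedata
from pathlib import Path

# Zero-width and invisible Unicode code points commonly used to hide payloads
# (an illustrative subset, not an exhaustive list).
ZERO_WIDTH = {
    "\u200b",  # ZERO WIDTH SPACE
    "\u200c",  # ZERO WIDTH NON-JOINER
    "\u200d",  # ZERO WIDTH JOINER
    "\u2060",  # WORD JOINER
    "\ufeff",  # ZERO WIDTH NO-BREAK SPACE (BOM when it appears mid-file)
}

def find_zero_width(text: str) -> list[tuple[int, str]]:
    """Return (index, code point name) pairs for zero-width characters in text."""
    return [
        (i, unicodedata.name(ch, "UNKNOWN"))
        for i, ch in enumerate(text)
        if ch in ZERO_WIDTH
    ]

def scan_skill(root: str) -> dict[str, list[tuple[int, str]]]:
    """Scan every Markdown file under root; an empty result means no hits."""
    findings = {}
    for path in Path(root).rglob("*.md"):
        hits = find_zero_width(path.read_text(encoding="utf-8"))
        if hits:
            findings[str(path)] = hits
    return findings
```

A file containing `"hid\u200bden"` would be flagged with the offending index and the name ZERO WIDTH SPACE, while a clean Markdown file produces no findings.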
Audit Metadata