tool-calling
Pass
Audited by Gen Agent Trust Hub on Mar 1, 2026
Risk Level: SAFE
Full Analysis
- [SAFE]: No malicious patterns or security risks were identified in the skill. The instructions prioritize security-first design principles for AI integrations.
- [PROMPT_INJECTION]: The skill provides specific mitigation strategies and adversarial test cases to help assistants identify and reject prompt injection and jailbreak attempts.
- [DATA_EXFILTRATION]: Includes logic and templates for masking sensitive data and stripping credentials from API responses to prevent accidental exposure of PII or secrets.
- [REMOTE_CODE_EXECUTION]: All provided code examples are educational templates that emphasize server-side execution, circuit breakers, and strict argument validation.
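The injection-screening behavior described in the PROMPT_INJECTION finding could look something like the following minimal sketch. The pattern list and function name are illustrative assumptions, not the skill's actual implementation; a production system would use a tuned classifier rather than a fixed regex list.

```python
import re

# Assumed patterns for illustration only; real deployments maintain a
# broader, regularly updated set or use a trained classifier.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard (the )?system prompt",
    r"you are now (in )?developer mode",
]

def looks_like_injection(text: str) -> bool:
    """Flag input that matches common prompt-injection phrasings."""
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)
```

Flagged inputs would then be rejected or escalated rather than passed to the tool-calling layer.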
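The credential-stripping logic from the DATA_EXFILTRATION finding can be sketched as a recursive scrub over a JSON-like API response. The key names in `SENSITIVE_KEYS` are assumptions for illustration; the skill's actual masking templates may differ.

```python
# Assumed sensitive key names; extend to match the APIs actually in use.
SENSITIVE_KEYS = {"api_key", "password", "token", "secret"}

def scrub(payload):
    """Recursively replace values under sensitive keys with a mask."""
    if isinstance(payload, dict):
        return {
            key: "***REDACTED***" if key.lower() in SENSITIVE_KEYS else scrub(value)
            for key, value in payload.items()
        }
    if isinstance(payload, list):
        return [scrub(item) for item in payload]
    return payload
```

Running the scrub on every response before it reaches the model prevents secrets from leaking into conversation context or logs.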
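A circuit breaker of the kind named in the REMOTE_CODE_EXECUTION finding could be sketched as below: the class, thresholds, and method names are hypothetical, chosen only to show the open/half-open/closed pattern that gates calls to a server-side tool.

```python
import time

class CircuitBreaker:
    """Refuse calls after `max_failures` consecutive failures,
    then allow a trial call once `reset_after` seconds have passed."""

    def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def allow(self) -> bool:
        """Return True if a call may proceed."""
        if self.opened_at is None:
            return True
        if time.monotonic() - self.opened_at >= self.reset_after:
            # Half-open: permit one trial call and reset the counter.
            self.opened_at = None
            self.failures = 0
            return True
        return False

    def record(self, success: bool) -> None:
        """Record a call outcome; open the circuit on repeated failure."""
        if success:
            self.failures = 0
        else:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
```

Pairing a breaker like this with strict argument validation before each call limits the blast radius of a misbehaving or compromised tool endpoint.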
Audit Metadata