agentic-standards

Pass

Audited by Gen Agent Trust Hub on Apr 8, 2026

Risk Level: SAFE
Full Analysis
  • [SAFE]: This skill provides architectural patterns and behavioral rules for building secure AI agents. It does not contain executable code, malicious scripts, or hidden commands.
  • [PROMPT_INJECTION]: The prompt-defense.md file contains defensive instructions that help agents recognize and flag injection attempts. The static analysis hit on 'Ignore previous instructions' is a false positive: the phrase appears as a teaching example of malicious patterns to watch for in untrusted tool output, not as an attempt to override the agent's instructions.
  • [DATA_EXFILTRATION]: Guidelines within the skill explicitly instruct the agent to avoid leaking credentials, API keys, or personal data. It enforces a rule to treat all tool outputs as untrusted data and flags sensitive information rather than using it directly.
  • [COMMAND_EXECUTION]: The permission-pipeline.md and safety-and-reversibility.md files describe a risk-based framework for executing shell commands. This framework emphasizes user confirmation for high-risk or destructive actions (e.g., git push --force, file deletions) and advocates for static analysis of commands to prevent shell injection.
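The injection-flagging behavior described in the PROMPT_INJECTION finding — scanning untrusted tool output for known override phrases and surfacing them instead of obeying them — could be sketched as follows. The function name and pattern list are illustrative assumptions; the skill's prompt-defense.md is not reproduced here.

```python
import re

# Illustrative injection phrases; the real skill likely maintains a
# richer, maintained list of patterns.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"disregard (your|the) system prompt", re.IGNORECASE),
]

def flag_injection_attempts(tool_output: str) -> list[str]:
    """Return injection phrases found in untrusted tool output.

    The caller should surface these matches to the user as flags;
    the agent must never treat the matched text as instructions.
    """
    return [m.group(0) for pat in INJECTION_PATTERNS
            for m in pat.finditer(tool_output)]
```

Flagging rather than filtering keeps the evidence intact: the user sees exactly which phrase tripped the defense.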
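The DATA_EXFILTRATION rule — treat tool output as untrusted and flag sensitive material rather than use it — might look like this minimal sketch. The secret regexes and names below are assumptions for illustration, not the skill's actual detection rules.

```python
import re

# Illustrative secret signatures; a production list would be broader.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "bearer_token": re.compile(r"Bearer\s+[A-Za-z0-9._\-]{10,}"),
    "private_key": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def flag_secrets(text: str) -> list[str]:
    """Return the names of secret types detected in untrusted text.

    An agent following the skill would report these flags and refuse
    to echo or reuse the underlying values.
    """
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]
```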
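The risk-based framework in the COMMAND_EXECUTION finding — classify a shell command, then require explicit user confirmation before anything destructive runs — can be sketched under assumed risk tiers. The pattern list and two-tier model are simplifications; permission-pipeline.md's real rules (static analysis, reversibility checks) are not reproduced here.

```python
import re

# Illustrative destructive-command signatures, including the audit's
# own example of git push --force.
HIGH_RISK = [
    re.compile(r"\bgit push\b.*--force"),
    re.compile(r"\brm\b.*-r?f\b"),
]

def classify_command(cmd: str) -> str:
    """Return 'high' for commands that need user confirmation, else 'low'."""
    return "high" if any(p.search(cmd) for p in HIGH_RISK) else "low"

def run_with_confirmation(cmd, execute, confirm):
    """Gate execution: high-risk commands run only after confirm(cmd) is True."""
    if classify_command(cmd) == "high" and not confirm(cmd):
        return None  # user declined; nothing executed
    return execute(cmd)
```

Keeping classification separate from execution lets the same classifier feed logging or dry-run modes without touching the gate.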
Audit Metadata
  • Risk Level: SAFE
  • Analyzed: Apr 8, 2026, 04:01 AM