tech-prompt-engineering

Pass

Audited by Gen Agent Trust Hub on Apr 14, 2026

Risk Level: SAFE
Full Analysis
  • [PROMPT_INJECTION]: The skill documentation, particularly in 'references/injection-patterns.md', includes numerous examples of prompt injection attacks (e.g., 'Ignore previous instructions', 'Act as DAN'). These are clearly identified as educational material to help developers recognize and mitigate such attacks, not as instructions for the agent to follow.
  • [DATA_EXFILTRATION]: The skill discusses data exfiltration as a potential security failure and provides architectural defenses (like output validation) to prevent it. No actual exfiltration logic or attempts to access sensitive data (e.g., environment variables or SSH keys) were found.
  • [COMMAND_EXECUTION]: Python code snippets provided in the 'references' directory demonstrate tasks like JSON schema validation, invariant checking, and regex-based attack detection. These snippets use standard, non-malicious libraries and do not execute system commands or spawn unsafe subprocesses.
  • [SAFE]: The skill emphasizes the 'Iron Law' of treating user input as hostile and provides robust frameworks for regression testing and structural separation to ensure AI application security.
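The two defenses the analysis cites, regex-based detection of known injection phrasings and output validation as an exfiltration guard, can be sketched roughly as follows. This is an illustrative sketch only: the pattern list, function names, and allowed-key invariant are assumptions modeled on the examples quoted above, not the skill's actual code.

```python
import re

# Hypothetical patterns modeled on the attack examples cited in the audit
# ('Ignore previous instructions', 'Act as DAN'); the skill's real
# references/injection-patterns.md may differ.
INJECTION_PATTERNS = [
    re.compile(r"ignore\s+(all\s+)?previous\s+instructions", re.IGNORECASE),
    re.compile(r"\bact\s+as\s+DAN\b", re.IGNORECASE),
]

def looks_like_injection(user_input: str) -> bool:
    """Flag input that matches a known prompt-injection phrasing."""
    return any(p.search(user_input) for p in INJECTION_PATTERNS)

def validate_output(output: dict, allowed_keys: set) -> bool:
    """Invariant check: a response must contain only expected fields,
    so unexpected keys (a possible exfiltration channel) are rejected."""
    return set(output) <= allowed_keys

print(looks_like_injection("Please ignore previous instructions"))      # → True
print(validate_output({"answer": "42"}, {"answer", "confidence"}))      # → True
print(validate_output({"answer": "42", "env": "leak"}, {"answer"}))     # → False
```

Regex screening alone is easily bypassed by paraphrasing, which is why the audit also credits the skill's architectural defenses (structural separation and output validation) rather than pattern matching on its own.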
Audit Metadata
Risk Level: SAFE
Analyzed: Apr 14, 2026, 06:21 AM