prompt-engineering-patterns

Pass

Audited by Gen Agent Trust Hub on Feb 21, 2026

Risk Level: SAFE
Full Analysis
  • PROMPT_INJECTION (SAFE): The skill contains numerous prompt templates and examples of Chain-of-Thought reasoning. While these templates use variable interpolation (e.g., {text}, {user_query}), they contain no malicious bypass instructions or attempts to extract system prompts. The patterns are consistent with standard prompt-engineering practice.
  • DATA_EXFILTRATION (SAFE): No sensitive file access (e.g., SSH keys, environment files) or unauthorized data transmission was detected. The scripts handle data locally for evaluation purposes.
  • REMOTE_CODE_EXECUTION (SAFE): No patterns of downloading and executing remote scripts (e.g., curl | bash) were found. The included Python script uses a mock/local client for testing and does not execute arbitrary code from external sources.
  • EXTERNAL_DOWNLOADS (SAFE): The skill references standard, trustworthy libraries such as numpy and openai. No suspicious or unverified third-party dependencies were identified.
  • DYNAMIC_EXECUTION (SAFE): Although the skill discusses code generation and debugging, the provided implementation in scripts/optimize-prompt.py does not use unsafe functions like eval() or exec() on LLM-generated output.
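For reference, the safe pattern the PROMPT_INJECTION and DYNAMIC_EXECUTION findings describe can be sketched as follows. This is an illustrative example, not code from the audited skill; the template text and function names are hypothetical:

```python
# Illustrative sketch (not from scripts/optimize-prompt.py): fill prompt
# templates via plain string interpolation, and treat model output as inert
# data -- never pass it to eval() or exec().

# Hypothetical template using the {text} placeholder style noted in the audit.
PROMPT_TEMPLATE = "Summarize the following text step by step:\n\n{text}"

def build_prompt(template: str, **fields: str) -> str:
    """Interpolate user-supplied fields into a template as plain data."""
    return template.format(**fields)

def handle_model_output(output: str) -> str:
    """Post-process LLM output as text only; no dynamic execution."""
    return output.strip()

prompt = build_prompt(PROMPT_TEMPLATE, text="LLMs predict tokens.")
```

Because the model's reply only ever flows through string operations, a malicious completion cannot trigger code execution in the host process.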
Audit Metadata
Risk Level
SAFE
Analyzed
Feb 21, 2026, 02:23 PM