prompt-engineering-patterns

Pass

Audited by Gen Agent Trust Hub on Mar 10, 2026

Risk Level: SAFEPROMPT_INJECTIONCOMMAND_EXECUTION
Full Analysis
  • [PROMPT_INJECTION]: The skill presents an attack surface for indirect prompt injection within its template systems and optimization logic.\n
  • Ingestion points: Untrusted data enters the context via TestCase.input in scripts/optimize-prompt.py and through variable interpolation in PromptTemplate.render and ConditionalTemplate.render in references/prompt-templates.md.\n
  • Boundary markers: The provided templates generally lack explicit delimiters or instructions for the model to ignore instructions embedded within data fields.\n
  • Capability inventory: The skill utilizes scripts/optimize-prompt.py to automate large-scale LLM interactions using a ThreadPoolExecutor.\n
  • Sanitization: There is no evidence of input validation, escaping, or filtering of external content before it is placed into prompt strings.\n- [COMMAND_EXECUTION]: The utility script scripts/optimize-prompt.py implements a PromptOptimizer class that manages concurrent LLM requests via a ThreadPoolExecutor to facilitate automated A/B testing and variation generation.\n- [SAFE]: The skill references several well-known third-party libraries for its primary functionality, including numpy, scikit-learn, scipy, sentence-transformers, and openai. These are industry-standard tools for machine learning and natural language processing tasks.
Audit Metadata
Risk Level
SAFE
Analyzed
Mar 10, 2026, 06:51 AM