prompt-engineering-patterns

Pass

Audited by Gen Agent Trust Hub on Mar 2, 2026

Risk Level: SAFEPROMPT_INJECTION
Full Analysis
  • [PROMPT_INJECTION]: The skill contains logic to interpolate data into system and user prompts, which establishes a surface for indirect prompt injection.\n
  • Ingestion points: External data is processed through the render methods in PromptTemplate and ConditionalTemplate (defined in references/prompt-templates.md) and within the evaluate_prompt method in scripts/optimize-prompt.py.\n
  • Boundary markers: Although the templates use descriptive sections (e.g., Context:, Input:), they do not employ robust isolation techniques or specific instructions to the model to ignore potential directives embedded within the data.\n
  • Capability inventory: The skill interacts with external LLM APIs via the openai library and performs local file writing for result persistence (optimization_results.json).\n
  • Sanitization: There is no evidence of variable sanitization, filtering, or delimiter-escaping to prevent malicious input from overriding intended prompt logic.
Audit Metadata
Risk Level
SAFE
Analyzed
Mar 2, 2026, 06:17 AM