prompt-engineering-patterns

Pass

Audited by Gen Agent Trust Hub on Feb 19, 2026

Risk Level: SAFE
Full Analysis
  • [Indirect Prompt Injection] (SAFE): The skill provides templates that interpolate external data (like RAG context or user queries) into prompts. While this is the intended purpose, it creates an ingestion surface for untrusted data. Delimiters are used in templates, but developers should implement additional sanitization in production environments.\n
  • Ingestion points: scripts/optimize-prompt.py (test case inputs), SKILL.md (SQL and RAG templates).\n
  • Boundary markers: Delimiters like Q:, A:, and Context: are present.\n
  • Capability inventory: LLM completions and local file writing for results.\n
  • Sanitization: No explicit sanitization or escaping of interpolated variables.\n- [Data Exposure & Exfiltration] (SAFE): The skill documentation and scripts involve interaction with LLM APIs (e.g., openai.ChatCompletion). These operations are intrinsic to the skill's purpose and do not involve unauthorized exfiltration of sensitive local data.
Audit Metadata
Risk Level
SAFE
Analyzed
Feb 19, 2026, 04:11 AM