prompt-engineering-patterns

Pass

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: SAFEPROMPT_INJECTIONEXTERNAL_DOWNLOADSCOMMAND_EXECUTION
Full Analysis
  • [Prompt Injection] (LOW): The skill provides multiple patterns and templates that interpolate untrusted data into LLM prompts, creating an attack surface for indirect prompt injection.
  • Ingestion points: Variables like {query}, {text}, and {input_data} in assets/prompt-template-library.md and the test suite in scripts/optimize-prompt.py.
  • Boundary markers: Most templates lack explicit delimiters or instructions to ignore embedded commands, relying instead on direct task instructions.
  • Capability inventory: The skill is limited to local metric calculation and file output of results; it lacks network exfiltration or shell execution capabilities.
  • Sanitization: No sanitization or escaping of interpolated variables is implemented in the provided scripts or templates.
  • [External Downloads] (LOW): The optimization script (scripts/optimize-prompt.py) depends on the numpy package. While numpy is a standard library for data science, it is an external dependency from a non-whitelisted source.
  • [Command Execution] (SAFE): The provided Python script performs local logic for metric tracking and prompt formatting. It does not utilize unsafe subprocess calls or system-level command execution.
Audit Metadata
Risk Level
SAFE
Analyzed
Feb 17, 2026, 06:44 PM