prompt-leverage

Pass

Audited by Gen Agent Trust Hub on Apr 20, 2026

Risk Level: SAFEPROMPT_INJECTION
Full Analysis
  • [PROMPT_INJECTION]: The script scripts/augment_prompt.py is vulnerable to indirect prompt injection.
  • Ingestion points: Raw user input is ingested via the prompt command-line argument in scripts/augment_prompt.py.
  • Boundary markers: The generated prompt lacks delimiters or 'ignore' instructions to isolate the untrusted user content from the framework instructions.
  • Capability inventory: The skill's output is intended to guide an AI agent's execution, potentially influencing its tool use or final response structure.
  • Sanitization: The input undergoes only whitespace normalization with no escaping of control sequences or instruction-like text that could subvert the template.
Audit Metadata
Risk Level
SAFE
Analyzed
Apr 20, 2026, 04:13 AM