prompt-engine

Warn

Audited by Gen Agent Trust Hub on Mar 22, 2026

Risk Level: MEDIUMCOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
  • [COMMAND_EXECUTION]: The skill executes a local Python script using user-provided input directly in the command line: python3 {PROMPT_ENGINE_DIR}/scripts/search_prompts.py "QUERY". If the agent does not properly sanitize the user's "QUERY" string, a malicious actor could perform command injection by including quotes and shell operators (e.g., ;, &&, or ||). Mitigation: Ensure the agent sanitizes or escapes user input before constructing shell commands, or use a method that passes arguments without a shell.
  • [PROMPT_INJECTION]: The skill exhibits an indirect prompt injection surface. Ingestion points: Data is ingested from the all_prompts.json database and user-provided prompts in the enhancement and adaptation workflows (SKILL.md). Boundary markers: No explicit delimiters or instructions to ignore embedded commands are present in the instruction logic. Capability inventory: The skill calls a Python subprocess (search_prompts.py) and returns data to the main agent context (SKILL.md). Sanitization: No sanitization or validation of the processed prompts is mentioned. This surface could allow maliciously formatted content in the database to influence agent behavior. Mitigation: Use strict delimiters (e.g., XML tags) and explicit instructions for the agent to treat processed content as data only.
Audit Metadata
Risk Level
MEDIUM
Analyzed
Mar 22, 2026, 09:14 PM