prompt-analysis

Pass

Audited by Gen Agent Trust Hub on Apr 11, 2026

Risk Level: SAFE
Findings: PROMPT_INJECTION, COMMAND_EXECUTION
Full Analysis
  • [PROMPT_INJECTION]: The skill is susceptible to indirect prompt injection because it processes historical AI conversations (human prompts and AI responses), which constitute untrusted data.
  • Ingestion points: Conversation data is retrieved from a local database using the git-ai prompts next command, as described in SKILL.md.
  • Boundary markers: The skill includes defensive instructions for subagents (e.g., "IMPORTANT: All data you need is in this JSON output. Do NOT run git commands."), but these are not absolute safeguards.
  • Capability inventory: The environment allows Bash(git-ai:*) and subagent spawning via the Task tool, which could be abused if a subagent is compromised by malicious data.
  • Sanitization: There is no evidence of sanitization or filtering of the retrieved JSON content before it is analyzed by subagents.
  • [COMMAND_EXECUTION]: The skill relies on dynamic SQL construction, presenting a risk of SQL injection against the local prompts.db file.
  • Evidence: The skill instructions recommend commands like git-ai prompts exec "UPDATE prompts SET work_type='<category>' WHERE id='<prompt_id>'", where <category> is a value generated by an AI agent.
  • Risk: Malicious or poorly formatted output from an agent (e.g., containing single quotes or SQL keywords) could malform the command or enable unauthorized database manipulation.
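The injection mechanics behind the COMMAND_EXECUTION finding can be sketched with Python's sqlite3 module. This is an illustrative assumption, not the skill's actual code: the in-memory database and two-column table stand in for the local prompts.db, and parameter binding stands in for the string-spliced UPDATE the skill instructions recommend.

```python
import sqlite3

# Stand-in for the local prompts.db file (table layout is an assumption).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE prompts (id TEXT PRIMARY KEY, work_type TEXT)")
conn.execute("INSERT INTO prompts VALUES ('p1', NULL), ('p2', NULL)")

# An adversarial AI-generated "category" containing a quote and SQL keywords,
# the exact failure mode the audit flags.
category = "x' OR '1'='1"

# Vulnerable pattern (string splicing, as in the skill instructions):
#   f"UPDATE prompts SET work_type='{category}' WHERE id='p1'"
# The payload would escape the string literal and rewrite every row.
# Parameter binding instead treats the value as opaque data:
conn.execute("UPDATE prompts SET work_type=? WHERE id=?", (category, "p1"))

rows = dict(conn.execute("SELECT id, work_type FROM prompts"))
print(rows)  # only p1 changes; the payload is stored verbatim, not executed
```

With binding, the single quotes never reach the SQL parser, so only the targeted row is modified regardless of what the agent emits.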
Audit Metadata
Risk Level: SAFE
Analyzed: Apr 11, 2026, 06:07 AM