skills/rohitg00/pro-workflow/llm-gate/Gen Agent Trust Hub

llm-gate

Pass

Audited by Gen Agent Trust Hub on Apr 5, 2026

Risk Level: SAFEPROMPT_INJECTION
Full Analysis
  • [PROMPT_INJECTION]: The skill documents the use of prompt hooks that interpolate untrusted user data ($ARGUMENTS) directly into LLM prompts. This creates a surface for indirect prompt injection where a user could provide a malicious input that bypasses the quality gate.
  • Ingestion points: Untrusted data enters the agent context through the $ARGUMENTS variable in the hook prompt configurations documented in SKILL.md.
  • Boundary markers: The provided examples do not use delimiters (such as XML tags or triple backticks) or specify boundary markers to separate user input from the gate's system instructions.
  • Capability inventory: These hooks are intended to control access to powerful tools like shell execution (Bash) and file writing (Write).
  • Sanitization: No input validation, escaping, or filtering mechanisms are suggested in the skill's guide or examples.
  • Mitigation: Wrap external content in protective delimiters and include instructions for the gate model to ignore any instructions found within the data block.
Audit Metadata
Risk Level
SAFE
Analyzed
Apr 5, 2026, 09:39 AM