hooks-mastery

Pass

Audited by Gen Agent Trust Hub on Feb 24, 2026

Risk Level: SAFE
Full Analysis
  • [COMMAND_EXECUTION]: The skill includes utility scripts such as scripts/test-hook-io.py and scripts/generate-hook-template.sh that execute local scripts and shell commands. This is the skill's primary purpose: a local testing environment in which developers verify their hook logic before deployment.
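The testing pattern described above can be sketched as follows. This is a hedged illustration of the stdin/stdout contract that Claude Code hooks use (JSON event in, JSON decision out), not the actual contents of scripts/test-hook-io.py; the inline HOOK_PROGRAM and run_hook helper are hypothetical stand-ins.

```python
import json
import subprocess
import sys

# Hypothetical stand-in for a user's hook script: it reads a JSON event
# on stdin and writes a JSON decision on stdout.
HOOK_PROGRAM = r"""
import json, sys
event = json.load(sys.stdin)
decision = {"decision": "block"} if "rm -rf" in event.get("command", "") else {}
print(json.dumps(decision))
"""

def run_hook(event: dict) -> dict:
    """Pipe a sample event into the hook process and parse its JSON reply."""
    proc = subprocess.run(
        [sys.executable, "-c", HOOK_PROGRAM],
        input=json.dumps(event),
        capture_output=True,
        text=True,
        timeout=10,
    )
    return json.loads(proc.stdout or "{}")

# Exercise the hook with a sample PreToolUse-style event.
print(run_hook({"hook_event_name": "PreToolUse", "command": "rm -rf /tmp/x"}))
```

Running the hook in a subprocess with a timeout mirrors how a harness can exercise hook I/O locally without touching a live agent session.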
  • [DATA_EXPOSURE]: The example script examples/userprompt-enricher/enricher.py includes a security feature that scans user prompts for sensitive patterns such as 'password', 'api_key', and 'token'. If detected, it blocks the prompt and provides a warning, demonstrating a proactive approach to preventing data exposure.
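The scan-and-block behavior described above can be sketched as below. This is an assumption-laden illustration of the described check, not the actual code of examples/userprompt-enricher/enricher.py; the pattern list, the check_prompt name, and the decision/reason shape are all hypothetical.

```python
import re

# Hypothetical pattern list; the real enricher.py may scan for different
# or additional patterns.
SENSITIVE_PATTERNS = [
    re.compile(p, re.IGNORECASE) for p in (r"password", r"api[_-]?key", r"token")
]

def check_prompt(prompt: str) -> dict:
    """Return a block decision with a warning if the prompt looks sensitive."""
    for pattern in SENSITIVE_PATTERNS:
        if pattern.search(prompt):
            return {
                "decision": "block",
                "reason": f"Prompt matches sensitive pattern: {pattern.pattern}",
            }
    return {}  # empty dict: allow the prompt through unchanged

print(check_prompt("here is my api_key=sk-123"))
```

An empty dict from the hook leaves the prompt untouched, while the block decision surfaces the warning to the user before anything is sent onward.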
  • [INDIRECT_PROMPT_INJECTION]: The skill describes an inherent attack surface in the Claude Code hooks protocol. Specifically, the prompt-based Stop hook example in SKILL.md and examples/stop-evaluator/README.md interpolates $ARGUMENTS (which contains conversation history) into an LLM prompt.
  • Ingestion points: Untrusted conversation data enters the prompt via the $ARGUMENTS placeholder in SKILL.md.
  • Boundary markers: The example prompts do not currently use delimiters or explicit "treat this as data, not instructions" guards to isolate the interpolated context.
  • Capability inventory: The output of this hook can influence the agent's decision to continue or stop execution.
  • Sanitization: No explicit sanitization of the conversation context is performed before interpolation.
  • [EXTERNAL_DOWNLOADS]: The documentation and installation guides reference only well-known services and tools, such as jsonschema (installed via pip) and nvm, which are standard dependencies for the described development workflow.
Audit Metadata
Risk Level
SAFE
Analyzed
Feb 24, 2026, 05:12 PM