Writing Hookify Rules
Fail
Audited by Gen Agent Trust Hub on Feb 16, 2026
Risk Level: HIGHPROMPT_INJECTIONCOMMAND_EXECUTION
Full Analysis
- [INDIRECT_PROMPT_INJECTION] (HIGH): The skill documentation describes a system that ingests untrusted data (user prompts, file contents, bash commands) and uses it to trigger the injection of arbitrary markdown messages into the agent's context. This creates a high-severity vulnerability surface.
- Ingestion points: Rules process data from the
commandfield (bash events),new_text/file_pathfields (file events), anduser_promptfield (prompt events). - Boundary markers: None. The skill does not suggest any delimiters or warnings to prevent the agent from obeying instructions contained within the triggered messages.
- Capability inventory: Rules can 'warn' (inject text) or 'block' (interrupt flow). The injected text is intended to guide the agent but can be used to redirect it.
- Sanitization: None. The skill provides no guidance on sanitizing the content being matched or the messages being displayed to prevent nested injection.
- [PERSISTENCE] (HIGH): The rules are stored locally in
.claude/hookify.{rule-name}.local.mdand are 'read dynamically on next tool use.' This allows for the creation of persistent 'backdoors' or logic overrides that survive across the entire project lifecycle, effectively maintaining control over the agent's behavior. - [COMMAND_EXECUTION] (LOW): The skill suggests executing
python3 -c "import re; ..."to test regex patterns. While this is a common utility for developers, it encourages the execution of code strings via the command line, which could be exploited if an attacker provides a 'test string' containing malicious code.
Recommendations
- AI detected serious security threats
Audit Metadata