llm-prompt-injection
Fail
Audited by Snyk on Apr 9, 2026
Risk Level: CRITICAL
Full Analysis
HIGH W007: Insecure credential handling detected in skill instructions.
- Insecure credential handling detected (high risk: 1.00). The playbook explicitly instructs reading sensitive data (system prompt, config files, conversation history) and embedding those values verbatim into HTTP requests, image URLs, tool arguments, or outputs for exfiltration, which requires the LLM to handle and output secrets directly.
CRITICAL E006: Malicious code pattern detected in skill scripts.
- Malicious code pattern detected (high risk: 1.00). This document is a high-risk offensive playbook: it contains explicit, actionable techniques for data exfiltration, remote code execution, credential theft, tool- and MCP-based backdoors, and supply-chain/obfuscation methods that can be directly used to perform malicious attacks.
MEDIUM W013: Attempt to modify system services in skill instructions.
- Attempt to modify system services in skill instructions detected (high risk: 1.00). This playbook explicitly instructs and demonstrates tool abuse and remote code execution (e.g., os.system('curl ... | bash')), file reads like /etc/passwd, and chained tool calls for exfiltration — all of which can change or compromise the host system state.
Issues (3)
W007
HIGHInsecure credential handling detected in skill instructions.
E006
CRITICALMalicious code pattern detected in skill scripts.
W013
MEDIUMAttempt to modify system services in skill instructions.
Audit Metadata