llm-prompt-injection
Audited by Socket on Apr 9, 2026
2 alerts found:
Security ×2
No traditional malware or executable supply-chain compromise mechanisms are present in this fragment. However, it is an explicitly actionable catalogue of LLM jailbreak and safety-evasion techniques, including strategies for system-prompt extraction and multi-step bypass escalation. If distributed as part of software tooling, it could significantly raise the likelihood of abusive prompt attacks against LLM-powered systems.
SUSPICIOUS: The skill is internally consistent as an LLM prompt-injection red-team playbook, but it is high-risk because it equips an AI agent with offensive security techniques, including system-prompt extraction, indirect injection, tool abuse, and exfiltration patterns. Supply-chain and credential-forwarding risk is low because no real installer, package, or token flow is present; the offensive capability itself, however, makes the skill dangerous.