llm-prompt-injection

Verdict: Warn

Audited by Socket on Apr 9, 2026

2 alerts found:

Security: MEDIUM
JAILBREAK_PATTERNS.md

No traditional malware or executable supply-chain compromise mechanism is present in this fragment. However, it is an explicitly actionable catalogue of LLM jailbreak and safety-evasion techniques, including strategies for system-prompt extraction and multi-step bypass escalation. If distributed as part of software tooling, it could significantly raise the likelihood of abusive prompt attacks against LLM-powered systems.

Confidence: 80%, Severity: 72%
Security: MEDIUM
SKILL.md

SUSPICIOUS: The skill is internally consistent as an LLM prompt-injection red-team playbook, but it is high-risk because it equips an AI agent with offensive techniques including system-prompt extraction, indirect injection, tool abuse, and exfiltration patterns. Supply-chain and credential-forwarding risk is low, since no real installer, package, or token flow is present, but the offensive capability itself makes the skill dangerous.

Confidence: 95%, Severity: 86%
Audit Metadata
Analyzed At: Apr 9, 2026, 01:14 PM
Package URL: pkg:socket/skills-sh/yaklang%2Fhack-skills%2Fllm-prompt-injection%2F@55928e86eb7e78f23bbfcc8fd6e420d6ab95f075