prompt-injection-testing
Audited by Socket on Feb 16, 2026
1 alert found:
Malware [Skill Scanner]: Detected jailbreak/DAN attempt

All findings:
- [CRITICAL] prompt_injection: Detected jailbreak/DAN attempt (PI003) [AITech 1.1]
- [CRITICAL] prompt_injection: Detected jailbreak/DAN attempt (PI003) [AITech 1.1]
- [CRITICAL] prompt_injection: Detected jailbreak/DAN attempt (PI003) [AITech 1.1]
- [CRITICAL] prompt_injection: Detected jailbreak/DAN attempt (PI003) [AITech 1.1]
- [CRITICAL] prompt_injection: Detected attempt to override previous instructions (PI001) [AITech 1.1]
- [CRITICAL] prompt_injection: Detected jailbreak/DAN attempt (PI003) [AITech 1.1]
- [CRITICAL] prompt_injection: Detected system prompt override attempt (PI004) [AITech 1.1]

This skill is a coherent, dual-use prompt injection testing toolkit: its declared purpose aligns with its capabilities and content. It does not contain code that autonomously exfiltrates data or runs system-level commands, but it explicitly provides jailbreak payloads and testing techniques that can be abused. The primary risks come from misuse (offensive exploitation), improper handling of API keys, and forwarding or logging of leaked model internals to other agents or systems. Overall, the artifact is not itself malware, but its dual-use nature poses a moderate security risk: it could enable abuse if distributed without controls.

LLM verification: The file is a defensive prompt-injection testing guide that contains highly actionable, copyable jailbreak payloads and techniques. There is no technical malware or obfuscated code present, but the document poses a moderate-to-high operational supply-chain risk because of its dual-use content and absent safeguards (authorization, lab scoping, responsible-use guidance). It should be distributed and used only within authorized testing programs with explicit controls and logging.