jailbreak-override
Audited by Socket on Feb 15, 2026
2 alerts found:
- Malware
- Obfuscated File

[Skill Scanner] Detected jailbreak/DAN attempt

All findings:
- [CRITICAL] prompt_injection: Detected jailbreak/DAN attempt (PI003) [AITech 1.1]
- [CRITICAL] prompt_injection: Detected jailbreak/DAN attempt (PI003) [AITech 1.1]
- [CRITICAL] prompt_injection: Detected attempt to override previous instructions (PI001) [AITech 1.1]

This SKILL.md is a deliberate jailbreak instruction set that instructs an agent to bypass safety/ethical constraints and to reveal protected internal state (notably the system prompt). While not executable malware in the traditional sense, it provides a clear, high-risk path for data exfiltration and covert behavior if executed by an agent with file-access tools. Treat as malicious/jailbreak content: do not enable or grant file/read/search capabilities to agents when this skill is present; remove or block the skill and audit for any disclosures if it was ever active.

LLM verification: This SKILL.md is a behavioral jailbreak that instructs an agent to ignore safety policies, disclose internal system prompts, and hide that the jailbreak was used. It is a high operational risk: while it does not contain traditional code-level malware, its purpose and instructions enable privilege escalation, secret exfiltration, and policy evasion. Treat as malicious/untrusted: do not install or enable. Remove the skill, audit for use, and rotate credentials or secrets if exposure is suspected.
This JSON is a deliberate jailbreak/prompt-injection artifact: it is not executable by itself, but it is high risk if consumed as instructions by an LLM or an unguarded skill loader. It should be treated as malicious input and rejected, sanitized, or handled in a strictly read-only, non-executable manner by any system that builds prompts or instruction contexts for models.
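The reject-before-load step described above can be sketched as a minimal pre-load guard. This is an illustrative assumption, not Socket's actual scanner: the pattern list below only mirrors the shape of the PI001/PI003 findings, and all function names are hypothetical — a real deployment would rely on a maintained ruleset rather than a handful of regexes.

```python
import re

# Hypothetical patterns loosely mirroring the PI001/PI003 findings above.
# A production guard would use a maintained, regularly updated ruleset.
INJECTION_PATTERNS = {
    "PI001": re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    "PI003": re.compile(r"\b(DAN|do anything now|jailbreak)\b", re.I),
}

def scan_skill_text(text: str) -> list[str]:
    """Return the IDs of injection patterns found in untrusted skill content."""
    return [pid for pid, pat in INJECTION_PATTERNS.items() if pat.search(text)]

def load_skill(text: str) -> str:
    """Reject skill content outright if any injection pattern matches."""
    findings = scan_skill_text(text)
    if findings:
        raise ValueError(f"skill rejected, prompt-injection findings: {findings}")
    return text
```

The key design choice is to reject rather than sanitize: stripping matched phrases from adversarial text is fragile, so a loader that refuses the whole artifact (and logs the finding IDs for audit) is the safer default.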