llm-attacks-security

Fail

Audited by Socket on Feb 23, 2026

1 alert found:

Malware (HIGH)
File: SKILL.md

[Skill Scanner] Detected jailbreak/DAN attempt

All findings:
[CRITICAL] prompt_injection: Detected jailbreak/DAN attempt (PI003) [AITech 1.1]
[CRITICAL] prompt_injection: Detected jailbreak/DAN attempt (PI003) [AITech 1.1]
[CRITICAL] prompt_injection: Detected jailbreak/DAN attempt (PI003) [AITech 1.1]

This artifact is not executable malware but a dual-use, high-abuse-risk documentation piece: it catalogs offensive LLM techniques and instructs agents to fetch external curated resources without pinning. The primary risks are behavioral misuse (agents generating harmful instructions) and supply-chain exposure from unpinned remote ingestion. Recommendations: restrict automated use, require human review of any fetched external content, and pin or verify remote resources before ingestion. This is not malicious code, but it should be treated as potentially dangerous documentation.

LLM verification: The SKILL.md is an offensive LLM-attack cheat sheet and README-integration guide. It contains no executable malware, obfuscated payloads, or credential-harvesting code, but it explicitly catalogs and encourages jailbreak, prompt-injection, and data-extraction techniques and explains how to publish them. The primary risk is high misuse potential and policy violation (operational security risk); the technical supply-chain risk is low. Recommend treating the document as high-risk content.
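The report's mitigation for the unpinned-ingestion risk ("pin or verify remote resources before ingestion") can be sketched in a few lines. This is a minimal illustration, not Socket's tooling: the URL and digest in the usage note are placeholders, and `verify_digest`/`fetch_pinned` are hypothetical helper names.

```python
import hashlib
import urllib.request


def verify_digest(data: bytes, expected_sha256: str) -> bool:
    """True only if the payload's SHA-256 matches the pinned digest."""
    return hashlib.sha256(data).hexdigest() == expected_sha256


def fetch_pinned(url: str, expected_sha256: str) -> bytes:
    """Fetch a remote resource and refuse it unless its digest matches the pin."""
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    if not verify_digest(data, expected_sha256):
        raise ValueError("pinned-digest mismatch; refusing to ingest remote content")
    return data
```

Usage would look like `fetch_pinned("https://example.com/resources.md", "<known-good sha256>")`: a digest mismatch aborts ingestion instead of silently passing attacker-modified content to an agent. Pinning a git commit (as the Package URL below does) achieves the same goal at the repository level.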

Confidence: 95%
Severity: 90%
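The PI003 finding above is a pattern-based flag on jailbreak/DAN phrasing. A minimal sketch of that kind of screening is below; the patterns and the `screen_for_jailbreak` name are illustrative assumptions, not Socket's actual rule set, which presumably uses much richer heuristics.

```python
import re

# Illustrative jailbreak/DAN markers (assumptions for this sketch).
JAILBREAK_PATTERNS = [
    re.compile(r"\bDAN\b.{0,40}\bdo anything now\b", re.IGNORECASE | re.DOTALL),
    re.compile(r"\bignore (all |any )?(previous|prior) instructions\b", re.IGNORECASE),
    re.compile(r"\bpretend (you are|to be) .{0,40}without (any )?restrictions\b",
                re.IGNORECASE),
]


def screen_for_jailbreak(text: str) -> list[str]:
    """Return the patterns that matched, so a human can review the hits."""
    return [p.pattern for p in JAILBREAK_PATTERNS if p.search(text)]
```

Consistent with the audit's recommendation, matches should route the document to human review rather than block it outright, since pattern hits in security documentation are often dual-use rather than malicious.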
Audit Metadata
Analyzed At
Feb 23, 2026, 07:23 AM
Package URL
pkg:socket/skills-sh/gmh5225%2Fawesome-ai-security%2Fllm-attacks-security%2F@14a04cc52f074af14b6806e98778a0b82c355316