book-cover-design
Audited by Socket on Feb 19, 2026
1 alert found:
Malware [Skill Scanner]: Pipe-to-shell or eval pattern detected

All findings:
- [CRITICAL] command_injection: Pipe-to-shell or eval pattern detected (CI013) [AITech 9.1.4]
- [CRITICAL] command_injection: Instruction to copy/paste content into terminal detected (CI012) [AITech 9.1.4]

This skill's content (markdown and examples) is not overtly malicious: it is a legitimate, practical guide for producing AI-generated book covers. However, it carries supply-chain and operational risks. It instructs users to run a remote installer via `curl | sh`, centralizes all generation through a third-party CLI (inference.sh) that receives prompts, images, and credentials, and grants the agent broad permission to run infsh with wildcard tooling. These patterns are high-risk for credential or data exposure if the inference.sh service or its downstream model endpoints are untrusted or compromised.

Recommendation: treat as SUSPICIOUS for supply-chain use until the CLI install script and inference.sh's data and privacy policies are audited and the allowed-tools scope is tightened.

LLM verification: the skill's content (prompts, design guidance) aligns with its stated purpose, and there is no direct evidence of obfuscated or malicious code in the provided SKILL.md itself. However, the Quick Start recommends executing a remote installer via `curl | sh` and defers all inference and authentication to a third-party CLI/service (inference.sh / infsh) without documenting data flows, retention, or where credentials go. That distribution model and opaque data handling are supply-chain and privacy risks.
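The recommended allowed-tools tightening could look like the following SKILL.md frontmatter sketch. This is an assumption-laden illustration: the `Bash(command:*)` permission syntax is modeled on Claude Code-style patterns, and `infsh run` is a hypothetical subcommand name, not verified against the real CLI.

```yaml
---
name: book-cover-design
# Before (flagged): wildcard access to the entire CLI
# allowed-tools: Bash(infsh:*)
# After: enumerate only what the skill actually needs
allowed-tools:
  - Bash(infsh run:*)   # generation calls only; no login/credential subcommands
  - Read
---
```

Narrowing the scope this way limits what a compromised prompt or dependency can do with the agent's granted permissions, which is the residual risk the audit points at even for benign skill content.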
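The CI013 finding concerns executing unreviewed remote code. A minimal sketch of the safer download-inspect-verify alternative to `curl | sh` is below; to keep it self-contained, the download step is simulated with a locally written stand-in installer rather than inference.sh's real endpoint:

```shell
#!/bin/sh
set -eu

# Instead of: curl -fsSL https://example.com/install.sh | sh
# 1. Download to a file first. (Simulated here with a local stand-in installer.)
printf 'echo "installer ran"\n' > install.sh

# 2. Pin a checksum so future downloads can be compared against a known-good copy.
sha256sum install.sh > install.sh.sha256
sha256sum -c install.sh.sha256

# 3. Inspect the script before executing it (e.g. look for nested pipe-to-shell).
grep -n 'curl\|eval' install.sh || true

# 4. Only after review, execute.
sh install.sh
```

The point of the pattern is that the script exists on disk and has been read before anything runs, so a compromised or swapped installer can be caught at step 2 or 3 rather than executed blind.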