llm-models

Fail

Audited by Socket on Mar 4, 2026

1 alert found:

Malware (HIGH)
SKILL.md

This skill documentation appears to legitimately describe using an inference CLI (infsh) to access multiple LLMs via OpenRouter and related providers. I found no embedded malicious code or hardcoded secrets in the provided text. However, the quick-start uses a `curl | sh` install that downloads and executes binaries from dist.inference.sh, with checksums hosted on the same domain — a classic supply-chain risk. The examples also recommend transitive installation of third-party skills via npx, which expands the trust surface. Overall the content is functional and aligned with its stated purpose, but the installation and transitive-install patterns carry a meaningful supply-chain security risk. Operators should manually verify checksums, review installer contents before execution, and audit any third-party skills before adding them.
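The verify-before-execute pattern recommended above can be sketched as follows. This is a minimal illustration, not part of the audited package: a local file stands in for the installer you would actually fetch from dist.inference.sh, and the file names are hypothetical placeholders.

```shell
# Demonstrates the audit's recommendation: never pipe curl straight into sh.
# Download the installer and its checksum as separate files, verify the
# checksum, review the script, and only then execute it.
# A local stand-in file is used here instead of a real download.

# Stand-in for the downloaded installer (in practice: curl -fsSLO <url>).
printf '#!/bin/sh\necho "installer ran"\n' > installer.sh

# Stand-in for the checksum file published alongside it.
sha256sum installer.sh > installer.sh.sha256

# The key step: refuse to execute unless the checksum matches.
if sha256sum -c installer.sh.sha256; then
    sh installer.sh
else
    echo "checksum mismatch: refusing to run" >&2
fi
```

Note that when the checksum file is served from the same domain as the binary (as the audit flags here), verification only defends against corrupted transfers, not a compromised host; an out-of-band checksum or signature is needed for the latter.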

Confidence: 95%
Severity: 90%
Audit Metadata
Analyzed At
Mar 4, 2026, 12:33 PM
Package URL
pkg:socket/skills-sh/tul-sh%2Fskills%2Fllm-models%2F@ff7f8eb7ffcf5b992baf124080d2a2c838833d38