llm-models
Audited by Socket on Mar 4, 2026
1 alert found:
Malware: This skill documentation appears to legitimately describe using an inference CLI (infsh) to access multiple LLMs via OpenRouter and related providers. I found no embedded malicious code or hardcoded secrets in the provided text. However, the quick-start uses a curl | sh install that downloads and executes binaries from dist.inference.sh and references checksums hosted on the same domain — a classic supply-chain risk. The examples also recommend transitive installation of third-party skills via npx, which expands the trust surface. Overall the content is functional and aligned with its purpose, but the installation and transitive-install patterns raise a meaningful supply-chain security risk. Operators should manually verify checksums, review installer contents before execution, and audit any third-party skills before adding them.
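The recommended mitigation — download, verify the checksum, review, and only then execute — can be sketched as below. The file names are illustrative (the actual paths on dist.inference.sh are not given in the alert), and the installer contents are simulated locally so the verification step itself is demonstrable:

```shell
set -euo pipefail

# In practice these two files would be fetched separately, e.g.:
#   curl -fsSLO https://dist.inference.sh/install.sh          (hypothetical path)
#   curl -fsSLO https://dist.inference.sh/install.sh.sha256   (hypothetical path)
# Here we stand them in locally so the checksum flow can be shown end to end.
printf 'echo installer ran\n' > install.sh
sha256sum install.sh > install.sh.sha256

# Abort if the checksum does not match. Note: a checksum hosted on the same
# domain as the binary only detects corruption, not a compromised server;
# compare against a value obtained out-of-band when possible.
sha256sum -c install.sh.sha256 || { echo "checksum mismatch; aborting" >&2; exit 1; }

# Review the script manually (e.g. `less install.sh`), then run it deliberately
# instead of piping curl straight into sh.
sh install.sh
```

The same discipline applies to npx-installed skills: pin exact versions and read the package contents before the first run, since npx will otherwise fetch and execute the latest publish.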