claude-scientific-skills
Audited by Socket on Feb 15, 2026
6 alerts found:
[Skill Scanner] Installation of third-party script detected

All findings:
- [CRITICAL] command_injection: Installation of third-party script detected (SC006) [AITech 9.1.4]
- [HIGH] data_exfiltration: Credential file access detected (DE002) [AITech 8.2.3]
- [HIGH] data_exfiltration: Credential file access detected (DE002) [AITech 8.2.3]

This skill is coherent with its stated purpose but contains high-risk capabilities that should not be granted by default without enforced protections. The primary hazards are arbitrary execution of LLM-generated code with full system privileges, configurable external MCP servers that can receive sensitive data, and automatic large data downloads without described integrity verification. I find no explicit hardcoded secrets or obfuscated/malicious code in the provided documentation, but the operational design requires strict sandboxing, integrity checks for downloads, and careful MCP endpoint vetting before use. Treat as potentially dangerous unless run in isolated, well-audited environments and after validating data sources and MCP server endpoints.

LLM verification: The Biomni fragment outlines a plausible autonomous biomedical AI agent framework, yet its current documentation contains several security and reliability gaps (typos in install commands, unpinned dependencies, credential exposure risks, and large local data handling without explicit safeguards). It should be treated with caution and hardened before deployment: fix the installation instructions, adopt pinned version constraints and reproducible builds, and implement secure credential management (secret …
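To make the pinning recommendation concrete, here is a minimal sketch that fails fast when installed package versions drift from a pinned manifest; the package names and versions are placeholders, not Biomni's actual dependency set.

```python
# Minimal sketch: abort startup if installed packages drift from a pinned
# manifest. Package names and versions are hypothetical placeholders.
from importlib.metadata import version, PackageNotFoundError

PINNED = {
    "requests": "2.31.0",  # placeholder pins, not Biomni's real deps
    "pandas": "2.1.4",
}

def verify_pins(pins: dict[str, str]) -> None:
    for name, expected in pins.items():
        try:
            installed = version(name)
        except PackageNotFoundError:
            raise SystemExit(f"{name} is not installed; expected {expected}")
        if installed != expected:
            raise SystemExit(f"{name}=={installed} does not match pinned {expected}")

if __name__ == "__main__":
    verify_pins(PINNED)
    print("All pinned dependencies verified.")
```

Failing hard on drift, rather than warning, is what makes the pin meaningful in an automated agent environment.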
[Skill Scanner] Installation of third-party script detected

Overall, the fragment appears to be a benign, coherently scoped skill description for a scientific tool discovery/execution platform. There are no malicious instructions, credential leaks, or misrepresentations within the provided content. In a real deployment, ensure provenance of the package, secure handling of credentials (API keys, tokens), and proper access controls for tool execution endpoints. The lack of explicit credential requirements in this fragment is sensible, but real usage will necessitate secure management of external service credentials.

LLM verification: This SKILL.md is documentation for a broad scientific-tool orchestration skill. The text itself contains no executable malicious code or obvious backdoors, so direct malware is unlikely in this fragment. However, the skill's scope (600+ tools), installation of third-party components (the scanner flagged a pip install), and lack of explicit data-flow and credential-handling guarantees raise supply-chain and data-exfiltration risks. Before trusting or installing the full implementation, reviewers should …
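As one way to satisfy the "proper access controls for tool execution endpoints" recommendation, the sketch below gates dispatch behind an explicit allowlist; the tool names and dispatch function are hypothetical, not the platform's real API.

```python
# Hedged sketch of an allowlist gate in front of tool execution.
# Tool names and the dispatch function are hypothetical illustrations.
ALLOWED_TOOLS = {"blast_search", "sequence_align"}  # explicitly vetted tools

def dispatch_tool(name: str, args: dict) -> str:
    if name not in ALLOWED_TOOLS:
        # Deny by default: an unvetted tool is rejected, not merely logged.
        raise PermissionError(f"tool {name!r} is not on the allowlist")
    # ... hand off to the real executor here ...
    return f"executed {name} with {args}"
```

With 600+ tools in scope, a deny-by-default gate keeps the attack surface proportional to what has actually been reviewed.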
[Skill Scanner] Installation of third-party script detected

All findings:
- [CRITICAL] command_injection: Installation of third-party script detected (SC006) [AITech 9.1.4]
- [CRITICAL] command_injection: Installation of third-party script detected (SC006) [AITech 9.1.4]
- [CRITICAL] command_injection: Installation of third-party script detected (SC006) [AITech 9.1.4]
- [CRITICAL] command_injection: Natural language instruction to download and install from URL detected (CI009) [AITech 9.1.4]
- [CRITICAL] command_injection: Installation of third-party script detected (SC006) [AITech 9.1.4]

This skill documentation is coherent: capabilities match the described purpose (protein sequence generation, structure prediction, embeddings) and the required credentials (Forge token) are proportional. No direct malicious code or supply-chain credential-harvesting patterns are present in the provided text. The main issues are: (1) operational risk from the dual-use biological capabilities (biosafety/ethics) inherent to protein design tools, (2) an unsafe example showing inline tokens, and (3) use of a URL shortener and a likely typo in the install instructions, both of which should be cleaned up. Overall there is a low likelihood of embedded malware but moderate security/operational risk, primarily from misuse and poor secret-handling practices.

LLM verification: Overall, the SKILL.md content is coherent with its stated purpose of an AI agent skill for protein modeling. However, several security concerns exist: unpinned pip dependencies and instructions to install from potentially untrusted sources, plus references to token-based Forge usage without explicit secure handling. These patterns are risky in material that could be executed in an automation environment. If used as-is, it could enable supply-chain risk or unintended remote code execution …
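A minimal fix for the inline-token pattern is to resolve the token from the environment at call time and refuse any hardcoded fallback; the FORGE_API_TOKEN variable name below is an assumption for illustration, not a documented setting.

```python
# Hedged sketch: read the Forge token from the environment instead of
# embedding it inline in examples. FORGE_API_TOKEN is a hypothetical name.
import os

def get_forge_token() -> str:
    token = os.environ.get("FORGE_API_TOKEN")
    if not token:
        raise RuntimeError(
            "FORGE_API_TOKEN is not set; refusing to fall back to an "
            "inline or hardcoded token"
        )
    return token
```

Keeping the token out of example code also keeps it out of shell history and copy-pasted notebooks.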
[Skill Scanner] Installation of third-party script detected

This SKILL.md specification appears coherent and consistent: the declared capabilities (Paper2Web, Paper2Video, Paper2Poster) align with the inputs, required APIs, and system dependencies described. There is no direct evidence of malicious code or deceptive data flows within this specification. The primary security concern is data exposure: the pipeline sends paper content and metadata to external services (OpenAI and optionally Google Search), which may be inappropriate for unpublished or sensitive research unless the user understands the privacy implications. Additional caution is warranted around third-party dependencies (requirements.txt) and optional binaries (Hallo2), which are outside this document. Overall the skill is benign in intent but carries moderate confidentiality risk if used with sensitive documents.

LLM verification: The Paper2All skill's documentation indicates expected network interactions with OpenAI and optionally Google that are necessary for its LLM-driven features. The primary risks are non-malicious but significant: (1) confidentiality exposure of unpublished or sensitive papers to third-party APIs, (2) supply-chain risk from unpinned dependency installation and external package fetching, and (3) unclear provenance and potential external downloads for the talking-head/video components. I found no explicit …
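One possible mitigation for the confidentiality risk is an explicit opt-in gate before any paper content leaves the machine; the PAPER2ALL_ALLOW_UPLOAD flag and helper below are illustrative assumptions, not part of Paper2All.

```python
# Hedged sketch: require explicit opt-in before sending paper content to a
# third-party API. The PAPER2ALL_ALLOW_UPLOAD flag is hypothetical.
import os

def check_upload_consent(paper_path: str) -> None:
    if os.environ.get("PAPER2ALL_ALLOW_UPLOAD") != "1":
        raise RuntimeError(
            f"Refusing to send {paper_path} to external services; "
            "set PAPER2ALL_ALLOW_UPLOAD=1 to acknowledge the privacy risk"
        )
```

An explicit flag forces the user to make the unpublished-paper trade-off consciously rather than by default.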
[Skill Scanner] Installation of third-party script detected

All findings:
- [CRITICAL] command_injection: Installation of third-party script detected (SC006) [AITech 9.1.4]
- [CRITICAL] command_injection: Installation of third-party script detected (SC006) [AITech 9.1.4]
- [CRITICAL] command_injection: Installation of third-party script detected (SC006) [AITech 9.1.4]
- [CRITICAL] command_injection: Installation of third-party script detected (SC006) [AITech 9.1.4]

This document describes a legitimate, feature-rich bioinformatics CLI/package (gget). I found no direct evidence of malicious code or obfuscation in the provided documentation. However, there are notable security concerns: examples show passing sensitive credentials (COSMIC password, OpenAI api_key) on the command line, which can leak via process listings or shell history; setup/download steps (AlphaFold model files, other DBs) lack stated provenance/checksums; and the package appears to run external binaries and downloads, which increases supply-chain risk depending on implementation and download endpoints. Before use in sensitive environments: inspect the actual implementation for where downloads are sourced, whether HTTPS plus checksum/signature verification is used, how subprocesses are spawned and whether inputs are sanitized, and prefer env vars or credential files over CLI flags for secrets.

LLM verification: The SKILL.md describes a legitimate multi-database bioinformatics toolkit with expected network-facing behavior. However, unpinned dependencies and non-standard installation pathways introduce notable supply-chain and reproducibility risks. Recommend adopting pinned dependencies (exact versions), using a requirements.txt with hash verification, providing an accompanying pyproject/poetry.lock or Pipfile.lock, and clearly documenting trust boundaries (source of dependencies, verified mirrors, and …
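The HTTPS-plus-checksum recommendation can be sketched as a small download-and-verify helper; this assumes the maintainers publish digests for the model files, and the URL and digest arguments are placeholders.

```python
# Hedged sketch: download over HTTPS and verify a SHA-256 digest before
# using the file. The url and expected_sha256 values are placeholders.
import hashlib
import urllib.request

def fetch_verified(url: str, dest: str, expected_sha256: str) -> None:
    if not url.startswith("https://"):
        raise ValueError("refusing non-HTTPS download")
    urllib.request.urlretrieve(url, dest)
    digest = hashlib.sha256()
    with open(dest, "rb") as fh:
        # Hash in 1 MiB chunks so large model files don't load into memory.
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    if digest.hexdigest() != expected_sha256:
        raise ValueError(f"checksum mismatch for {dest}; discard the file")
```

A mismatch should be treated as a hard failure and the file discarded, not a warning to be clicked through.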
[Skill Scanner] Installation of third-party script detected

This skill manifest/documentation appears consistent with its stated purpose (healthcare ML toolkit). There is no evidence in the documentation of malicious behavior, obfuscation, hardcoded secrets, or third-party credential harvesting. The primary risk is privacy and operational: the examples and workflows operate on sensitive clinical datasets (MIMIC, eICU, OMOP) and print/save identifiers and predictions without explicit guidance on PHI protection, access control, or secure logging. Recommendations: (1) fix the install typo; (2) ensure the actual package enforces or documents safe PHI handling, secure logging, and dataset access/authentication; (3) audit the runtime package for any hidden network endpoints, telemetry, or data exfiltration. Based on the manifest alone, classify as BENIGN but PRIVACY-SENSITIVE.

LLM verification: The reviewed manifest/documentation describes a legitimate healthcare ML toolkit whose behavior (reading local datasets, training models, writing checkpoints) aligns with its stated purpose. There is no direct evidence of malicious code or obfuscation in the provided text. However, a meaningful supply-chain risk exists because the documentation instructs an unpinned 'pip install pyhealth' and lacks guidance on provenance, checksum verification, dependency pinning, and telemetry/data-governance defaults …
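For the secure-logging recommendation, one lightweight safeguard is to pseudonymize record identifiers before they reach logs or printed output; the salted-hash scheme and field names below are an illustrative sketch, not PyHealth behavior.

```python
# Hedged sketch: pseudonymize patient identifiers before logging.
# The LOG_SALT variable and truncated-digest scheme are illustrative,
# not PyHealth's API.
import hashlib
import os

def pseudonymize(patient_id: str) -> str:
    salt = os.environ.get("LOG_SALT", "")
    if not salt:
        raise RuntimeError("LOG_SALT must be set for pseudonymized logging")
    return hashlib.sha256((salt + patient_id).encode()).hexdigest()[:12]

# Usage: log pseudonymize(pid) instead of the raw identifier, so log files
# and console output never carry PHI directly.
```

A per-deployment salt keeps the pseudonyms stable within a run while preventing trivial re-identification across sites.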