generate-swagger-docs
Audited by Socket on Feb 15, 2026
1 alert found:
Malware [Skill Scanner]: Natural language instruction to download and install from URL detected

All findings:
- [CRITICAL] command_injection: Natural language instruction to download and install from URL detected (CI009) [AITech 9.1.4]
- [CRITICAL] command_injection: URL pointing to executable file detected (CI010) [AITech 9.1.4]

Analysis: The skill's declared purpose (generating OpenAPI docs using an LLM-backed helper) is consistent with the capabilities described, and there is no direct evidence of obfuscated malware or explicit exfiltration code in the provided text. However, two practical supply-chain and secret-management risks exist: (1) the skill downloads and executes a run.sh script directly from a GitHub branch rather than a pinned commit or release, so the script could be modified to include malicious actions if the remote repository or branch is compromised; and (2) it saves the OpenAI API key in plaintext to apimesh/config.json, creating a high chance of accidental leakage if users do not secure the file with a .gitignore entry or restrictive file permissions.

Recommendation: treat as SUSPICIOUS — acceptable to use after mitigation. Specifically: pin remote downloads to a verified commit or release, inspect run.sh before executing it, avoid persisting API keys in plaintext (use an OS keyring or per-run environment variables), and ensure config.json is excluded from version control and filesystem backups if it contains secrets.

LLM verification: The skill's stated purpose (automatic Swagger/OpenAPI generation) is plausible, and many of its behaviors are consistent with that purpose. However, its installation and execution model relies on downloading and running an external script from a third-party GitHub repository, with the user's OpenAI API key supplied to the subprocess environment and saved locally. That introduces a significant supply-chain and credential-exposure risk. Without auditing the remote run.sh and the apimesh tool, this skill should be treated as suspicious.
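The pin-and-inspect mitigation can be sketched in shell. This is a minimal illustration, not the skill's own installer: the repository URL, commit SHA, and expected checksum are placeholders you would fill in after reviewing the script yourself.

```shell
# Hypothetical mitigation sketch: only execute a downloaded script after its
# checksum matches a value recorded when the script was last reviewed.
verify_then_run() {
  file="$1"
  expected="$2"
  # Compute the actual SHA-256 of the downloaded file.
  actual=$(sha256sum "$file" | cut -d' ' -f1)
  if [ "$actual" != "$expected" ]; then
    echo "checksum mismatch for $file; refusing to execute" >&2
    return 1
  fi
  sh "$file"
}

# Pinned fetch (commented out; OWNER, REPO, COMMIT, and EXPECTED_SHA256 are
# placeholders, not values from the audited skill):
# COMMIT=<verified commit SHA, not a branch name>
# curl -fsSL "https://raw.githubusercontent.com/OWNER/REPO/$COMMIT/run.sh" -o run.sh
# verify_then_run run.sh "$EXPECTED_SHA256"
```

Pinning to a commit SHA (rather than a branch) plus a checksum check means a later compromise of the remote branch cannot silently change what gets executed.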
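The per-run environment-variable mitigation can also be sketched briefly. The `with_api_key` helper and the `apimesh generate` command line below are illustrative assumptions, not part of the skill; the point is that the key is scoped to a single child process instead of being written to apimesh/config.json.

```shell
# Hypothetical helper: supply OPENAI_API_KEY only for the duration of one
# command, so it is never persisted to disk in plaintext.
with_api_key() {
  key="$1"
  shift
  # A VAR=value prefix scopes the variable to that single child process;
  # it does not remain set in the calling shell.
  OPENAI_API_KEY="$key" "$@"
}

# Illustrative usage (the apimesh invocation and keyring lookup are assumptions):
# with_api_key "$(secret-tool lookup service openai)" apimesh generate
```

Combined with a keyring lookup, this keeps the secret out of config.json, shell history, and backups entirely.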