external-llm-consulting
Audit result: Fail
Audited by Gen Agent Trust Hub on Mar 24, 2026
Risk Level: HIGH
Tags: COMMAND_EXECUTION, EXTERNAL_DOWNLOADS, REMOTE_CODE_EXECUTION, DATA_EXFILTRATION
Full Analysis
- [COMMAND_EXECUTION]: The skill uses the Bash tool to execute external CLI commands (`codex` and `gemini`). User-provided prompts and codebase context are interpolated directly into shell command strings (e.g., `codex exec ... "<prompt>"`), which is vulnerable to command injection if the input contains shell metacharacters.
- [EXTERNAL_DOWNLOADS]: The skill instructs users to download and install external Node.js packages globally (`npm install -g`) if the tools are not found on the system. It also references a remote repository for the Gemini CLI (github.com/google-gemini/gemini-cli) for documentation.
- [REMOTE_CODE_EXECUTION]: The recommended installation commands target unverified and suspicious packages. Specifically, `@anthropic-ai/gemini-cli` is highly suspect, as Gemini is a Google product, not an Anthropic one; this naming convention is consistent with malicious package impersonation or supply-chain attacks. Similarly, `@openai/codex` is a non-standard package name for OpenAI's official tooling.
- [DATA_EXFILTRATION]: The skill's primary workflow gathers sensitive codebase context from files such as `CLAUDE.md` and `AGENTS.md`, which is then transmitted to external third-party LLM services. While the skill advises against including secrets, the automated collection and transmission of architectural patterns and constraints to external endpoints presents a data-exposure risk.
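The command-injection finding above can be illustrated with a minimal sketch. This is not the skill's actual code; the function names and the `codex exec` invocation shape are assumptions based on the pattern the audit describes. It contrasts interpolating a prompt into a shell string with passing it as a discrete argument vector:

```python
import shlex
import subprocess


def run_codex_unsafe(prompt: str) -> None:
    # VULNERABLE pattern (illustrative only): the prompt is spliced into a
    # shell string, so metacharacters like `;` or `$( )` are parsed by the
    # shell and can execute arbitrary commands.
    subprocess.run(f'codex exec "{prompt}"', shell=True)


def build_codex_argv(prompt: str) -> list[str]:
    # Safer pattern: the prompt travels as a single argv element, so no
    # shell ever parses it and metacharacters stay inert literal text.
    return ["codex", "exec", prompt]


# A hostile prompt that would break out of the quoted shell string above:
malicious = 'hi"; rm -rf ~; echo "'
argv = build_codex_argv(malicious)
assert argv[2] == malicious  # payload remains one literal argument

# If a shell string truly cannot be avoided, quote every piece explicitly:
quoted_cmd = " ".join(shlex.quote(part) for part in argv)
```

Invoking the argv form with `subprocess.run(argv)` (no `shell=True`) would neutralize the injection vector the audit flags, regardless of what the prompt contains.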
Recommendations
- AI analysis detected serious security threats in this skill.
Audit Metadata