gemini-cli
Warn
Audited by Gen Agent Trust Hub on Feb 17, 2026
Risk Level: MEDIUMEXTERNAL_DOWNLOADSCOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
- [EXTERNAL_DOWNLOADS] (LOW): The skill directs the user to install the
@google/gemini-clipackage via npm. Per [TRUST-SCOPE-RULE], the 'google-gemini' organization is trusted, so the download itself is low severity, though it remains a remote dependency. - [COMMAND_EXECUTION] (MEDIUM): The skill heavily utilizes the
Bashtool to run thegeminiCLI. It explicitly instructs the agent to use the--yoloflag, which enables auto-approval for all tool calls (including file writing and potentially further shell execution) initiated by the Gemini model. This creates a risk where the auxiliary AI could perform dangerous system actions without explicit user consent. - [PROMPT_INJECTION] (LOW): In
SKILL.md, the instructions include advice on using 'forceful language' (e.g., 'Do this without asking for confirmation') to bypass the Gemini CLI's internal planning prompts and confirmation steps, which is a form of instruction override. - [INDIRECT_PROMPT_INJECTION] (LOW): This skill has a high surface area for indirect injection.
- Ingestion points: Untrusted data enters via
google_web_search,web_fetch, and reading local files viaread_fileorcodebase_investigator. - Boundary markers: The prompt templates do not provide clear delimiters or 'ignore' instructions for the data being fetched from the web.
- Capability inventory: The skill is granted
Bash,Write, andReadpermissions, providing ample opportunity for an injection to escalate to file modification or command execution. - Sanitization: No evidence of sanitization or validation of fetched web content is present before it is used to influence agent decisions.
- [DYNAMIC_EXECUTION] (MEDIUM): The skill promotes workflows where code is generated by an AI and then immediately executed or validated via shell commands (e.g., Pattern 7 'Validation Pipeline' and 'Multi-File Project' templates). When combined with the
--yoloflag, this significantly increases the risk of executing malicious code generated via prompt injection or model hallucination.
Audit Metadata