collaborating-with-gemini
Fail
Audited by Gen Agent Trust Hub on Mar 7, 2026
Risk Level: HIGHEXTERNAL_DOWNLOADSCOMMAND_EXECUTIONPROMPT_INJECTIONCREDENTIALS_UNSAFE
Full Analysis
- [EXTERNAL_DOWNLOADS]: The skill requires the installation of the
@google/gemini-clipackage via npm. This specific package is not a recognized official Google tool, which poses a supply chain risk if the package name is used for typosquatting or malicious purposes. - [COMMAND_EXECUTION]: The bridge script in
scripts/gemini_bridge.pyexecutes system commands viasubprocess.Popen. On Windows, it assembles a command string forcmd.exeusing a custom escaping routine (_cmd_quote), which is a fragile practice that can lead to command injection vulnerabilities if specifically crafted inputs are used. - [PROMPT_INJECTION]: The skill is susceptible to indirect prompt injection because it incorporates user-provided text directly into calls to an external AI model without sufficient protection.
- Ingestion points: The
--PROMPTparameter ingemini_bridge.pyaccepts arbitrary user text. - Boundary markers: No delimiters or protective instructions are used to separate user input from the model's logic.
- Capability inventory: The script executes shell commands and interacts with remote APIs.
- Sanitization: The implementation lacks content-based sanitization for the prompt data.
- [CREDENTIALS_UNSAFE]: The skill's setup process involves
gemini auth login, which stores credentials locally. These secrets are then available to any process running thegeminibinary, creating a risk of credential theft if the binary itself is untrusted.
Recommendations
- AI detected serious security threats
Audit Metadata