collaborating-with-gemini

Fail

Audited by Gen Agent Trust Hub on Mar 7, 2026

Risk Level: HIGHEXTERNAL_DOWNLOADSCOMMAND_EXECUTIONPROMPT_INJECTIONCREDENTIALS_UNSAFE
Full Analysis
  • [EXTERNAL_DOWNLOADS]: The skill requires the installation of the @google/gemini-cli package via npm. This specific package is not a recognized official Google tool, which poses a supply chain risk if the package name is used for typosquatting or malicious purposes.
  • [COMMAND_EXECUTION]: The bridge script in scripts/gemini_bridge.py executes system commands via subprocess.Popen. On Windows, it assembles a command string for cmd.exe using a custom escaping routine (_cmd_quote), which is a fragile practice that can lead to command injection vulnerabilities if specifically crafted inputs are used.
  • [PROMPT_INJECTION]: The skill is susceptible to indirect prompt injection because it incorporates user-provided text directly into calls to an external AI model without sufficient protection.
  • Ingestion points: The --PROMPT parameter in gemini_bridge.py accepts arbitrary user text.
  • Boundary markers: No delimiters or protective instructions are used to separate user input from the model's logic.
  • Capability inventory: The script executes shell commands and interacts with remote APIs.
  • Sanitization: The implementation lacks content-based sanitization for the prompt data.
  • [CREDENTIALS_UNSAFE]: The skill's setup process involves gemini auth login, which stores credentials locally. These secrets are then available to any process running the gemini binary, creating a risk of credential theft if the binary itself is untrusted.
Recommendations
  • AI detected serious security threats
Audit Metadata
Risk Level
HIGH
Analyzed
Mar 7, 2026, 09:04 AM