invoking-gemini
Pass
Audited by Gen Agent Trust Hub on Mar 5, 2026
Risk Level: SAFE
Full Analysis
- [SAFE]: The skill implementation is consistent with its stated purpose of providing an API client for Google Gemini. No malicious patterns or security risks were identified in the code or documentation.
- [EXTERNAL_DOWNLOADS]: The client makes network requests to Cloudflare AI Gateway (gateway.ai.cloudflare.com) and Google's Generative Language API (generativelanguage.googleapis.com). These are well-known, trusted services required for the skill's core functionality.
- [PROMPT_INJECTION]: The skill has an indirect prompt injection surface, as it processes external prompts and images for model invocation.
- Ingestion points: the prompt and image_path parameters in functions within scripts/gemini_client.py.
- Boundary markers: None identified in the prompt assembly or interpolation logic.
- Capability inventory: Network requests to Cloudflare and Google APIs, file system read access for images, and file system write access for generated output.
- Sanitization: None identified in the provided scripts. This surface is characteristic of LLM-integration tools and does not constitute a vulnerability in this context.
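The absence of boundary markers noted above can be illustrated by contrast. The sketch below shows one common mitigation: delimiting untrusted input before it is interpolated into a prompt. The helper name, marker string, and overall approach are hypothetical, not taken from scripts/gemini_client.py.

```python
# Hypothetical sketch of boundary-marker sanitization; the names below
# (wrap_untrusted, BOUNDARY) are illustrative and do not appear in the skill.

BOUNDARY = "<<EXTERNAL_CONTENT>>"

def wrap_untrusted(text: str) -> str:
    """Wrap externally supplied text in explicit boundary markers so a
    system prompt can instruct the model to treat it as data, not commands."""
    # Strip any marker collisions embedded in the untrusted text itself.
    sanitized = text.replace(BOUNDARY, "")
    return f"{BOUNDARY}\n{sanitized}\n{BOUNDARY}"

print(wrap_untrusted("Summarize this file."))
```

A client that assembles prompts this way gives the model an unambiguous signal about where external content begins and ends, which is the pattern this audit checked for and did not find.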
- [CREDENTIALS_UNSAFE]: No hardcoded credentials or secrets were found. The skill documentation provides clear and secure instructions for users to configure their own API keys via environment files (proxy.env) or individual key files.
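The key-configuration pattern described above might look like the following sketch: read the key from the process environment first, then fall back to a proxy.env file. The variable name GEMINI_API_KEY and the parsing logic are assumptions for illustration, not confirmed details of the skill.

```python
# Hypothetical sketch of the documented key-loading flow; GEMINI_API_KEY
# and the file format below are assumed, not taken from the skill's code.
import os
from typing import Optional

def load_api_key(env_file: str = "proxy.env") -> Optional[str]:
    """Return the API key from the environment, else from env_file, else None."""
    key = os.environ.get("GEMINI_API_KEY")
    if key:
        return key
    try:
        with open(env_file) as f:
            for line in f:
                line = line.strip()
                if line.startswith("GEMINI_API_KEY="):
                    return line.split("=", 1)[1]
    except FileNotFoundError:
        pass
    return None
```

Keeping the key in an environment file rather than in source is what earns the finding its safe rating: the secret never appears in the audited code itself.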
Audit Metadata