google-gemini-api
Pass
Audited by Gen Agent Trust Hub on Feb 17, 2026
Risk Level: SAFE
Full Analysis
- [Prompt Injection] (SAFE): No instructions attempting to override agent behavior, bypass safety filters, or extract system prompts were detected. The documentation focuses purely on API implementation and migration.
- [Data Exposure & Exfiltration] (SAFE): The templates correctly use environment variables (`process.env.GEMINI_API_KEY`) and Cloudflare Worker secrets (`env.GEMINI_API_KEY`) for authentication. No hardcoded secrets, sensitive file paths, or unauthorized network exfiltration were found. Network calls are restricted to the official Google API endpoint (`generativelanguage.googleapis.com`).
- [Obfuscation] (SAFE): No multi-layer encoding, zero-width characters, homoglyphs, or hidden executable commands were found in the source code or documentation.
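The environment-variable pattern the audit describes can be sketched as follows. This is a minimal illustration, not the template's actual code: the helper name `buildGenerateContentUrl` and the query-parameter auth style are assumptions.

```javascript
// Sketch: source the key from process.env (never a hardcoded literal)
// and target only the official Google API host.
const GEMINI_ENDPOINT = "https://generativelanguage.googleapis.com";

function buildGenerateContentUrl(model, env = process.env) {
  const apiKey = env.GEMINI_API_KEY; // secret comes from the environment
  if (!apiKey) {
    throw new Error("GEMINI_API_KEY is not set");
  }
  // All network calls stay on the official endpoint.
  return `${GEMINI_ENDPOINT}/v1beta/models/${model}:generateContent?key=${apiKey}`;
}
```

In a Cloudflare Worker, the same function would receive the Worker's `env` binding instead of `process.env`.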
- [Unverifiable Dependencies & Remote Code Execution] (SAFE): The skill uses standard, well-known dependencies from the npm registry, specifically the official `@google/genai` SDK. No remote scripts are downloaded or executed at runtime. The `check-versions.sh` script is a local utility for package verification with no network-to-shell execution patterns.
- [Privilege Escalation] (SAFE): No commands for privilege escalation (e.g., `sudo`, `chmod 777`, or system configuration modification) were detected.
- [Persistence Mechanisms] (SAFE): No attempts to establish persistence through shell profiles, cron jobs, or startup services were found.
- [Metadata Poisoning] (SAFE): The plugin metadata in `plugin.json` and the markdown headers are accurate and descriptive of the skill's actual functionality.
- [Indirect Prompt Injection] (LOW): As a chatbot template, the skill naturally has ingestion points for untrusted data (e.g., in `cloudflare-worker.ts` via `request.json()`). While it has the capability to perform network operations (API calls), it does not provide downstream write access to sensitive systems. This is a standard risk for any LLM-based application template and is managed by the underlying LLM's safety filters.
- [Time-Delayed / Conditional Attacks] (SAFE): No logic gating behavior based on time, date, or specific environmental triggers was detected.
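The untrusted-data ingestion point flagged above can be illustrated with a minimal validation sketch. The helper and its bounds are hypothetical; the real `cloudflare-worker.ts` may validate differently.

```javascript
// Hypothetical sketch: treat the parsed body from request.json() as
// attacker-controlled and validate it before forwarding to the model.
function validateChatBody(body) {
  if (body === null || typeof body !== "object") return null;
  const { message } = body;
  // Reject non-string, empty, or oversized messages from the client.
  if (typeof message !== "string" || message.length === 0 || message.length > 4000) {
    return null;
  }
  return { message };
}
```

In a Worker `fetch` handler this would run on the result of `await request.json()`; the 4000-character cap is illustrative, not the template's actual limit.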
- [Dynamic Execution] (SAFE): The skill demonstrates the `codeExecution` tool feature of the Gemini API. This allows the AI model to run generated Python code within a Google-managed sandbox. This is a primary intended feature of the API and does not represent a vulnerability in the skill's own code.
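The `codeExecution` tool is enabled by including a tool entry in the generation request. A sketch of the request shape follows; the helper function is hypothetical, and field names use the SDK's camelCase convention.

```javascript
// Hypothetical sketch: building a generateContent request body with the
// codeExecution tool enabled. With the official @google/genai SDK the
// rough equivalent is:
//   ai.models.generateContent({ model, contents,
//     config: { tools: [{ codeExecution: {} }] } })
function buildCodeExecutionRequest(prompt) {
  return {
    contents: [{ role: "user", parts: [{ text: prompt }] }],
    // Lets the model write and run Python inside Google's managed
    // sandbox; nothing executes on the caller's machine.
    tools: [{ codeExecution: {} }],
  };
}
```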
Audit Metadata