firebase-ai-logic
Pass
Audited by Gen Agent Trust Hub on Mar 10, 2026
Risk Level: SAFE
Full Analysis
- [PROMPT_INJECTION]: The skill includes specific configuration instructions, such as a preference for using "gemini-flash-latest" and avoiding "gemini-1.5-flash". These are functional constraints intended to guide model selection and do not constitute a malicious attempt to override safety filters or extract system prompts.
- [EXTERNAL_DOWNLOADS]: The documentation references the installation of the official Firebase CLI (
npm install -g firebase@latest) and provides links to official Firebase documentation domains (firebase.google.com). These references target well-known, trusted resources associated with the skill's stated vendor. - [INDIRECT_PROMPT_INJECTION]: The skill demonstrates how to process untrusted data from users (prompts) and external files (images, audio, video, PDFs) through generative models.
- Ingestion points: User-provided
promptstrings in text generation functions andimageFileobjects in multimodal functions withinreferences/usage_patterns_web.md. - Boundary markers: The provided code snippets do not include explicit delimiter-based boundary markers or instructions to ignore embedded content.
- Capability inventory: The skill utilizes
generateContent,sendMessage, andgenerateContentStreamto process these inputs. - Sanitization: The examples show raw data passing to the model without explicit sanitization or validation steps.
- Risk Assessment: While this creates an attack surface for indirect prompt injection, it is inherent to the skill's purpose of providing AI integration examples and is documented here for situational awareness.
Audit Metadata