google-gemini-api

Pass

Audited by Gen Agent Trust Hub on Mar 10, 2026

Risk Level: SAFEPROMPT_INJECTION
Full Analysis
  • [PROMPT_INJECTION]: The skill documents an attack surface for indirect prompt injection by providing patterns for processing external untrusted data through a model equipped with high-impact capabilities.
  • Ingestion points: Multiple files, including SKILL.md (Context Caching section) and templates like templates/multimodal-video-audio.ts, demonstrate how to read and provide external files (text, PDF, video, audio) directly to the Gemini model.
  • Boundary markers: The provided code examples and instructions do not include the use of explicit delimiters or 'ignore embedded instructions' warnings when interpolating external content into prompts.
  • Capability inventory: The skill features documentation for 'Code Execution' (a Python sandbox environment) and 'Function Calling', both of which could be manipulated by adversarial instructions hidden within processed data.
  • Sanitization: There is no documented guidance or example implementation of input validation or sanitization for external data before it is sent to the model's context.
Audit Metadata
Risk Level
SAFE
Analyzed
Mar 10, 2026, 03:49 AM