ai-tools
Audit Result: Fail
Audited by Gen Agent Trust Hub on Feb 16, 2026
Risk Level: HIGH
Threat Tags: PROMPT_INJECTION, COMMAND_EXECUTION, EXTERNAL_DOWNLOADS
Full Analysis
- Indirect Prompt Injection (HIGH): The skill is designed to ingest and process untrusted external data (media files, PDFs, and Google Search results) using the Gemini API and CLI.
  - Ingestion points: `scripts/gemini_batch_process.py --files`, `gemini "Review [file]"`, and web research results.
  - Boundary markers: None specified in the instructions or tool definitions.
  - Capability inventory: The skill allows the use of `Bash`, `Write`, and `Edit`, creating a high-risk path where instructions hidden in a processed file or search result could command the agent to execute malicious shell commands or modify local source code.
  - Sanitization: No sanitization or filtering logic is mentioned for the external content before it is processed by the LLM.
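Since the audit found no boundary markers or sanitization, the missing control can be sketched in a few lines. This is a minimal illustration, not part of the skill: the `BOUNDARY_START`/`BOUNDARY_END` marker strings and the `wrap_untrusted` helper are assumptions chosen for the example.

```python
# Illustrative boundary markers; any distinctive, hard-to-guess strings work.
BOUNDARY_START = "<<<UNTRUSTED_CONTENT>>>"
BOUNDARY_END = "<<<END_UNTRUSTED_CONTENT>>>"


def wrap_untrusted(text: str) -> str:
    """Delimit external data so the system prompt can tell the model to
    treat everything between the markers as data, never as instructions."""
    # Remove any marker strings an attacker embedded to forge a fake boundary.
    cleaned = text.replace(BOUNDARY_START, "").replace(BOUNDARY_END, "")
    return f"{BOUNDARY_START}\n{cleaned}\n{BOUNDARY_END}"
```

Stripping embedded copies of the markers matters: without it, a poisoned PDF could "close" the untrusted region early and smuggle text that the model reads as trusted instructions.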
- Command Execution & Safety Bypass (HIGH): The `gemini` CLI documentation explicitly mentions the `--yolo` or `-y` flag for 'Auto-approve tool calls'.
  - Evidence: Using this flag in an agentic context removes critical human oversight, allowing the model to execute potentially destructive tool calls (via `Bash` or `Edit`) autonomously based on potentially poisoned input.
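The safeguard that `--yolo` removes can be sketched as an explicit human gate in front of each tool call. The `execute_tool_call` helper below is hypothetical, not part of the skill or the `gemini` CLI; its `auto_approve` flag mirrors the auto-approve behavior the audit flags.

```python
import shlex
import subprocess


def execute_tool_call(command: str, auto_approve: bool = False) -> bool:
    """Run a shell command only after explicit human approval.

    auto_approve=True mirrors the --yolo behavior the audit flags:
    the command runs with no human in the loop.
    Returns True if the command ran, False if the human declined.
    """
    if not auto_approve:
        answer = input(f"Run `{command}`? [y/N] ").strip().lower()
        if answer != "y":
            print("Skipped.")
            return False
    subprocess.run(shlex.split(command), check=True)
    return True
```

Defaulting to "No" means a poisoned input can at most propose a destructive command; a human still has to confirm it.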
- Credential Handling (LOW): The skill instructs the user to `export GEMINI_API_KEY="your-key"`.
  - Evidence: While it handles sensitive credentials, it uses a placeholder and standard environment variables rather than hardcoding secrets, which is considered a low-risk best-practice violation for setup instructions.
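The pattern the audit describes, a placeholder exported as an environment variable, can be read back at runtime without the secret ever appearing in source. This is a sketch: the `load_api_key` helper and its `.env` fallback are assumptions beyond the skill's `export` instruction.

```python
import os
from pathlib import Path


def load_api_key(env_file: str = ".env") -> str:
    """Read GEMINI_API_KEY from the environment, falling back to a .env file.

    No secret is hardcoded; the key lives only in the shell environment
    (set via `export GEMINI_API_KEY="your-key"`) or in an untracked .env file.
    """
    key = os.environ.get("GEMINI_API_KEY")
    if key:
        return key
    path = Path(env_file)
    if path.is_file():
        for line in path.read_text().splitlines():
            line = line.strip()
            if line.startswith("GEMINI_API_KEY="):
                return line.split("=", 1)[1].strip().strip('"')
    raise RuntimeError("GEMINI_API_KEY is not set")
```

In practice the skill's `python-dotenv` dependency would replace the hand-rolled `.env` parsing shown here; the file-reading branch is only to keep the example self-contained.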
- External Downloads (LOW/INFO): The skill requires installing standard Python packages.
  - Evidence: `pip install google-genai python-dotenv pillow`. These packages come from a trusted source (PyPI) and are necessary for the skill's functionality, falling under the [TRUST-SCOPE-RULE] as LOW/INFO.
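The import names of these packages differ from their PyPI names (`python-dotenv` imports as `dotenv`, `pillow` as `PIL`), so a small pre-flight check can confirm the `pip install` step succeeded before the skill runs. The `missing_dependencies` helper is illustrative, not part of the skill.

```python
import importlib

# PyPI package name -> importable module name for the three declared installs.
REQUIRED = {
    "google-genai": "google.genai",
    "python-dotenv": "dotenv",
    "pillow": "PIL",
}


def missing_dependencies() -> list:
    """Return the PyPI names of required packages that fail to import."""
    missing = []
    for pypi_name, module_name in REQUIRED.items():
        try:
            importlib.import_module(module_name)
        except ImportError:
            missing.append(pypi_name)
    return missing
```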
Recommendations
- AI detected serious security threats. Based on the findings above: do not run the `gemini` CLI with `--yolo`/`-y` when processing untrusted inputs; add boundary markers around ingested files and search results so the model treats them as data rather than instructions; add sanitization or filtering before external content reaches the LLM; and require human approval for `Bash`, `Write`, and `Edit` tool calls.