gemini
Fail
Audited by Gen Agent Trust Hub on Feb 22, 2026
Risk Level: CRITICALEXTERNAL_DOWNLOADSCOMMAND_EXECUTIONDATA_EXFILTRATIONPROMPT_INJECTION
Full Analysis
- [EXTERNAL_DOWNLOADS] (CRITICAL): Automated scanners identified a confirmed blacklisted malicious URL within the
requirements.mdfile. As the skill uses this file for context during AI consultation, this indicates exposure to known malicious infrastructure.\n- [COMMAND_EXECUTION] (MEDIUM): The skill relies onscripts/gemini-consult.shfor its core operations. This script is not provided for review, representing an unverified execution point that could potentially be used for arbitrary command execution or unsafe parameter handling.\n- [DATA_EXFILTRATION] (LOW): The skill's primary function is to read local codebase files, logs, and requirements and transmit them to the external Google Gemini API. This constitutes intentional data egress of potentially sensitive intellectual property.\n- [PROMPT_INJECTION] (LOW): The skill is highly vulnerable to Indirect Prompt Injection (Category 8) because it is designed to process untrusted data from both the local repository and the web.\n - Ingestion points: Local files passed via
--context-fileand external web documentation sources.\n - Boundary markers: None identified; instructions do not specify delimiters for untrusted context or warnings to ignore embedded commands.\n
- Capability inventory: Execution of local shell scripts and network access for LLM consultation.\n
- Sanitization: No evidence of input validation or content sanitization prior to LLM ingestion.
Recommendations
- AI detected serious security threats
- Contains 1 malicious URL(s) - DO NOT USE
Audit Metadata