google-genai

Audit result: Fail

Audited by Gen Agent Trust Hub on Feb 12, 2026

Risk Level: CRITICAL
Tags: COMMAND_EXECUTION, EXTERNAL_DOWNLOADS, PROMPT_INJECTION
Full Analysis

================================================================================

🔴 VERDICT: CRITICAL

This skill contains a critical arbitrary code execution vulnerability due to the use of eval() with LLM-generated input in an executable script. While the skill provides good security advice in other areas, this specific pattern is highly dangerous.

Total Findings: 5

🔴 CRITICAL Findings:

• Command Execution (Arbitrary Code Execution)
  • scripts/function_calling.py, line 90:
    result = eval(function_call.args["expression"])
    This line directly executes arbitrary Python code provided by the LLM via the function_call.args["expression"] variable. If a malicious prompt causes the LLM to generate a dangerous expression (e.g., os.system('rm -rf /')), this script would execute it. The comment "Use safely in production" is insufficient to mitigate the inherent risk in the example itself.
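A safer alternative to the flagged pattern (a minimal sketch, not code from the audited skill) is to parse the LLM-supplied expression with Python's ast module and evaluate only a whitelist of arithmetic nodes, so anything beyond simple math raises an error instead of executing:

```python
import ast
import operator

# Whitelisted operations; anything else is rejected.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def safe_eval(expression: str) -> float:
    """Evaluate an arithmetic expression without the risks of eval()."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        # Function calls, attribute access, names, etc. all land here.
        raise ValueError(f"Disallowed syntax: {type(node).__name__}")
    return walk(ast.parse(expression, mode="eval"))
```

With this in place, the risky line would become result = safe_eval(function_call.args["expression"]), and an injected payload such as __import__('os').system(...) would raise ValueError rather than run.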

🔴 HIGH Findings:

• Command Execution (Arbitrary Code Execution, Documentation)
  • references/api_reference.md, line 105:
    result = eval(function_call.args["expression"])
• Command Execution (Arbitrary Code Execution, Documentation)
  • references/function_calling.md, line 60:
    result = eval(function_call.args["expression"])

These documentation examples also demonstrate the use of eval() with LLM-generated input. While not directly executable scripts, they promote a highly dangerous pattern that could be copied by users, leading to critical vulnerabilities.

🔵 LOW Findings:

• Unverifiable Dependency (Trusted Source)
  • SKILL.md, line 16: uv add google-genai
    The skill instructs users to install google-genai using uv. While this is an external dependency, google-genai is the official SDK from Google, which is a trusted organization. This finding is downgraded to LOW/INFO.
• Data Exfiltration (Example URL)
  • references/multimodal.md, line 46:
    image_bytes = requests.get("https://example.com/image.jpg", timeout=10).content
    This example shows a network request to example.com. While requests.get can be used for exfiltration, example.com is a placeholder domain and not a malicious target in this context. This finding is downgraded to LOW/INFO.
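For completeness, a hardened version of the documented download (a sketch only; the URL is the same placeholder used in references/multimodal.md, and the size limit is an assumption) would check the response status, content type, and size before trusting the bytes:

```python
import requests

MAX_BYTES = 10 * 1024 * 1024  # assumed cap; refuse oversized downloads

def fetch_image(url: str) -> bytes:
    """Fetch an image with basic validation instead of blindly reading .content."""
    resp = requests.get(url, timeout=10, stream=True)
    resp.raise_for_status()  # fail on HTTP errors instead of parsing an error page
    content_type = resp.headers.get("Content-Type", "")
    if not content_type.startswith("image/"):
        raise ValueError(f"Unexpected content type: {content_type}")
    data = resp.raw.read(MAX_BYTES + 1)
    if len(data) > MAX_BYTES:
        raise ValueError("Image exceeds size limit")
    return data
```

This does not change the audit conclusion (the original example is benign), but it is the shape users should copy if they swap in a real URL.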

ℹ️ TRUSTED SOURCE References: • SKILL.md, line 16: The google-genai package is from the trusted google organization.

================================================================================

Recommendations
  • Remove the eval() call in scripts/function_calling.py and replace it with a restricted expression evaluator, or validate LLM-provided input before execution.
  • Update the examples in references/api_reference.md and references/function_calling.md so they no longer demonstrate eval() on LLM-generated input.
Audit Metadata
Risk Level
CRITICAL
Analyzed
Feb 12, 2026, 03:01 PM