google-adk

Fail

Audited by Gen Agent Trust Hub on Mar 10, 2026

Risk Level: HIGHCOMMAND_EXECUTIONEXTERNAL_DOWNLOADSREMOTE_CODE_EXECUTIONDATA_EXFILTRATIONPROMPT_INJECTION
Full Analysis
  • [COMMAND_EXECUTION]: The documentation in references/tools-reference.md provides an example of a calculation tool that uses the Python eval() function. This allows for arbitrary code execution if an attacker can control the input string, which is a common target for prompt injection attacks.
  • [COMMAND_EXECUTION]: In references/authentication.md, the Workload Identity Federation (WIF) configuration example includes an executable credential source that runs a local command (/usr/local/bin/azure-token-provider).
  • [EXTERNAL_DOWNLOADS]: The MCPToolset example in references/tools-reference.md uses npx -y to download and execute the mongodb-mcp-server at runtime. While this is a well-known service, executing code downloaded at runtime from a package registry is a significant security risk.
  • [REMOTE_CODE_EXECUTION]: The A2A (Agent-to-Agent) protocol described in references/a2a-protocol.md allows agents to communicate with remote services, potentially allowing for remote code execution if the remote agent is compromised or malicious.
  • [DATA_EXFILTRATION]: Multiple files (e.g., references/python-samples.md, references/authentication.md) provide patterns for agents to access sensitive user data from BigQuery, Calendar, and Gmail. This creates a significant surface for data exfiltration if the agent's instructions are subverted.
  • [PROMPT_INJECTION]: While references/safety.md provides mitigations for prompt injection, it explicitly documents common injection patterns and how they might bypass basic filters, highlighting the inherent vulnerability of LLM-based agents.
Recommendations
  • AI detected serious security threats
Audit Metadata
Risk Level
HIGH
Analyzed
Mar 10, 2026, 10:17 PM