google-adk
Fail
Audited by Gen Agent Trust Hub on Mar 10, 2026
Risk Level: HIGHCOMMAND_EXECUTIONEXTERNAL_DOWNLOADSREMOTE_CODE_EXECUTIONDATA_EXFILTRATIONPROMPT_INJECTION
Full Analysis
- [COMMAND_EXECUTION]: The documentation in
references/tools-reference.mdprovides an example of a calculation tool that uses the Pythoneval()function. This allows for arbitrary code execution if an attacker can control the input string, which is a common target for prompt injection attacks. - [COMMAND_EXECUTION]: In
references/authentication.md, the Workload Identity Federation (WIF) configuration example includes anexecutablecredential source that runs a local command (/usr/local/bin/azure-token-provider). - [EXTERNAL_DOWNLOADS]: The
MCPToolsetexample inreferences/tools-reference.mdusesnpx -yto download and execute themongodb-mcp-serverat runtime. While this is a well-known service, executing code downloaded at runtime from a package registry is a significant security risk. - [REMOTE_CODE_EXECUTION]: The A2A (Agent-to-Agent) protocol described in
references/a2a-protocol.mdallows agents to communicate with remote services, potentially allowing for remote code execution if the remote agent is compromised or malicious. - [DATA_EXFILTRATION]: Multiple files (e.g.,
references/python-samples.md,references/authentication.md) provide patterns for agents to access sensitive user data from BigQuery, Calendar, and Gmail. This creates a significant surface for data exfiltration if the agent's instructions are subverted. - [PROMPT_INJECTION]: While
references/safety.mdprovides mitigations for prompt injection, it explicitly documents common injection patterns and how they might bypass basic filters, highlighting the inherent vulnerability of LLM-based agents.
Recommendations
- AI detected serious security threats
Audit Metadata