# Google ADK Python Skill (`google-adk-python`)

Expert guide for Google's Agent Development Kit (ADK) for Python: an open-source, code-first toolkit for building, evaluating, and deploying AI agents. Optimized for Gemini, but model-agnostic by design.
## When to Activate

- Build single- or multi-agent systems with tool integration
- Implement the A2A protocol for remote agent communication
- Integrate MCP servers as agent tools
- Use workflow agents (sequential, parallel, loop) for pipelines
- Manage sessions, state, memory, and artifacts
- Add callbacks, plugins, or observability hooks
- Deploy to Cloud Run, Vertex AI Agent Engine, or GKE
- Evaluate agents with the `adk eval` framework
## Agent Structure Convention (Required)

```text
my_agent/
├── __init__.py   # MUST: from . import agent
└── agent.py      # MUST: root_agent = Agent(...) OR app = App(...)
```
## Quick Start

```shell
pip install google-adk   # stable (weekly releases)
uv sync --all-extras     # dev setup (uv required; Python 3.10+, 3.11+ recommended)
```

```python
from google.adk import Agent

root_agent = Agent(
    name="assistant",
    model="gemini-2.5-flash",
    instruction="You are a helpful assistant.",
    description="General assistant agent.",
    tools=[get_weather],  # get_weather: a plain Python function exposed as a tool
)
```
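ADK tools are ordinary Python functions; the framework derives the tool schema from the signature and docstring. A minimal sketch of the `get_weather` tool referenced above, where the return shape and the canned data are illustrative assumptions rather than anything ADK prescribes:

```python
def get_weather(city: str) -> dict:
    """Return a weather report for the given city.

    Args:
        city: Name of the city to look up.

    Returns:
        A dict with a "status" key plus a "report" or an "error_message".
    """
    # Hypothetical canned data; a real tool would call a weather API.
    known = {"paris": "Sunny, 22 C", "london": "Rainy, 15 C"}
    report = known.get(city.lower())
    if report is None:
        return {"status": "error", "error_message": f"No data for {city}."}
    return {"status": "success", "report": report}
```

Because the schema is inferred from type hints and the docstring, descriptive parameter documentation directly improves the model's tool selection.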
## App Pattern (Production)

```python
from google.adk import Agent
from google.adk.apps import App
from google.adk.apps.app import EventsCompactionConfig
from google.adk.plugins.save_files_as_artifacts_plugin import SaveFilesAsArtifactsPlugin

app = App(
    name="my_app",
    root_agent=Agent(name="my_agent", model="gemini-2.5-flash", ...),
    plugins=[SaveFilesAsArtifactsPlugin()],
    events_compaction_config=EventsCompactionConfig(compaction_interval=2),
)
```

Use `App` when you need plugins, event compaction, or custom lifecycle management.
## CLI Tools

| Command | Purpose |
|---|---|
| `adk web <agents_dir>` | Dev UI (recommended for development) |
| `adk run <agent_dir>` | Interactive CLI testing |
| `adk api_server <agents_dir>` | FastAPI production server |
| `adk eval <agent> <evalset.json>` | Run evaluation suite |
## Agent Types

| Type | Use Case |
|---|---|
| `Agent` / `LlmAgent` | Dynamic routing, tool use, reasoning |
| `SequentialAgent` | Fixed-order pipeline |
| `ParallelAgent` | Concurrent execution |
| `LoopAgent` | Iterative processing |
| `RemoteA2aAgent` | Remote agent via A2A protocol |
## Key APIs

| Feature | API |
|---|---|
| State | `tool_context.state[key] = value` |
| Artifacts | `tool_context.save_artifact(name, part)` |
| Callbacks | `before_agent_callback`, `after_model_callback`, etc. |
| MCP Tools | `MCPToolset(connection_params=StdioConnectionParams(...))` |
| Sub-agents | `Agent(..., sub_agents=[agent1, agent2])` |
| Human-in-loop | `LongRunningFunctionTool(func=my_func)` |
| Plugins | `App(..., plugins=[MyPlugin()])` |
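The state row above follows the same pattern in a real tool: ADK injects a `ToolContext` into the tool call and its `state` attribute behaves like a dict. The stub class below stands in for the framework-provided context so the sketch is self-contained; names like `FakeToolContext` and `remember_preference` are illustrative, not ADK API:

```python
class FakeToolContext:
    """Stand-in for ADK's ToolContext, which exposes a dict-like `state`."""
    def __init__(self):
        self.state = {}

def remember_preference(unit: str, tool_context) -> dict:
    """Tool that persists the user's preferred temperature unit in session state."""
    tool_context.state["preferred_unit"] = unit  # same pattern as with a real ToolContext
    return {"status": "success", "preferred_unit": unit}

ctx = FakeToolContext()
result = remember_preference("celsius", ctx)
```

In ADK itself you would not construct the context; declaring a `tool_context` parameter on the tool function is enough for the framework to pass one in.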
## Model Support

- Latest: `gemini-2.5-flash` (default), `gemini-2.5-pro`, `gemini-2.0-flash` (sunsets Mar 2026)
- Preview: `gemini-3-flash-preview`, `gemini-3-pro-preview`
- Also: Anthropic Claude, Ollama, LiteLLM, vLLM, Model Garden
## Best Practices

- Code-first: define agents in Python for version control and testing
- Agent convention: always expose a `root_agent` or `app` variable in `agent.py`
- Modular agents: specialize per domain, compose via `sub_agents`
- Workflow selection: workflow agents for predictable flows, `LlmAgent` for dynamic routing
- State: `ToolContext.state` for ephemeral data, `MemoryService` for long-term memory
- Safety: callbacks for guardrails, tool confirmation for sensitive operations
- Evaluate: test with `adk eval` plus an evalset JSON before deployment
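The guardrail bullet above can be sketched as a before-model callback: in ADK, returning a non-None response from the callback short-circuits the model call. The version below simplifies the real signature (ADK callbacks receive `CallbackContext` and `LlmRequest` objects, not raw strings), so treat the types and the deny-list as illustrative assumptions:

```python
BLOCKED_TERMS = {"password", "ssn"}  # hypothetical deny-list

def before_model_guardrail(user_text: str):
    """Return a refusal to block the model call, or None to proceed.

    Mirrors the ADK before_model_callback contract (non-None return
    short-circuits the LLM call) using plain strings for simplicity.
    """
    lowered = user_text.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "I can't help with requests involving sensitive data."
    return None  # allow the model call to proceed
```

In production you would attach the real callback via `Agent(..., before_model_callback=...)` and inspect the request contents rather than a bare string.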
## References

Detailed guides (load as needed):

- `references/agent-types-and-architecture.md`: agent types, workflows, custom agents
- `references/tools-and-mcp-integration.md`: custom tools, MCP, tool filtering
- `references/multi-agent-and-a2a-protocol.md`: sub-agents, A2A, coordinator patterns
- `references/sessions-state-memory-artifacts.md`: state, artifacts, sessions, memory
- `references/callbacks-plugins-observability.md`: lifecycle hooks, plugins, tracing
- `references/evaluation-testing-cli.md`: `adk eval`, CLI, evalset format
- `references/deployment-cloud-run-vertex-gke.md`: Cloud Run, Vertex AI, GKE