# ADK Debugger Skill (adk-debugger)

## What is ADK Debugging?
Every ADK agent records its behavior as traces and logs — every conversation turn, tool call, LLM reasoning step, and error. These are the source of truth for understanding what your agent did and why.
The ADK CLI provides all the tools you need to debug. Every command supports `--format json` for structured output; always use it when consuming output programmatically.
## When to Use This Skill
Use this skill when the developer asks about:
- Bot not working — not responding, wrong responses, unexpected behavior
- Tool issues — wrong tool called, tool errors, hallucinated parameters
- Workflow problems — stuck workflows, steps not executing, state issues
- Reading traces/logs — how to query, filter, and interpret debug output
- LLM misbehavior — hallucinations, refusals, looping, poor extraction
- Build/deploy failures — validation errors, schema mismatches
- Config issues — agent.json vs agent.local.json, integration setup
- Post-fix verification — confirming a fix worked, writing regression evals
**Trigger questions:**
- "My bot isn't responding"
- "The wrong tool was called"
- "My workflow is stuck"
- "How do I read traces?"
- "How do I check logs?"
- "The LLM is hallucinating"
- "Something broke after my last change"
- "My deploy failed"
- "
adk checkfound errors" - "Summarize this trace"
- "What happened in trace X?"
- "Give me an overview of this conversation turn"
- "Why did the bot do X in this trace?"
- "Walk me through what happened"
- "How do I debug this?"
- "Summarize this conversation"
- "Explain what happened in conversation X"
- "Why did the bot respond that way?"
- "Walk me through this conversation"
- "What went wrong in this conversation?"
## Available Documentation
| File | Contents |
|---|---|
| `references/traces-and-logs.md` | CLI debugging tools, log querying, trace structure, span types, onTrace hooks, reproduction with `adk chat` |
| `references/common-failures.md` | Runtime failure patterns — validation, bot not responding, tool errors, workflow stuck, integration failures, build errors, config confusion |
| `references/llm-debugging.md` | LLM behavior issues — wrong tool, hallucinated params, refusals, token limits, looping, reading model reasoning |
| `references/debug-workflow.md` | The systematic 8-step debug loop: validate → reproduce → logs → traces → classify → fix → verify → prevent |
| `references/trace-summarization.md` | How to fetch, walk, and summarize traces as free-form natural-language narratives — adapting depth to context |
| `references/conversation-analysis.md` | How to summarize and explain full conversations — listing conversations, timeline analysis, correlating with traces, common patterns |
## How to Answer

- "How do I read traces/logs?" → Read `traces-and-logs.md` for CLI commands and trace structure
- Something is broken, known pattern → Read `common-failures.md` for the matching failure pattern
- LLM is misbehaving → Read `llm-debugging.md` for the matching behavior issue
- Systematic investigation needed → Read `debug-workflow.md` and follow the 8-step loop
- "Summarize this trace" / "What happened?" → Read `trace-summarization.md` for how to fetch, walk, and narrate traces
- "Summarize this conversation" / "Explain what happened" → Read `conversation-analysis.md` for multi-turn conversation summaries and explanations
- After fixing, need to prevent regression → Point to the `adk-evals` skill for writing evals
## Quick Reference

### The Debug Loop

```
symptom → validate (adk check) → reproduce (adk chat) → logs (adk logs) → traces (adk traces) → root cause → fix → verify
```
### CLI Commands (always use `--format json`)

```bash
adk check --format json                                    # offline validation
adk logs error --format json                               # recent errors
adk logs --follow --format json                            # stream live
adk traces --format json                                   # recent traces
adk traces --conversation-id <id> --format json            # specific conversation
adk chat --single "msg" --format json                      # test message
adk dev --non-interactive --format json                    # structured dev output
adk conversations --format json                            # list recent conversations
adk conversations show <id> --format json                  # conversation timeline
adk conversations show <id> --include-llm --format json    # timeline with LLM reasoning
```
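Once you have `adk traces --format json` output in hand, the first pass is usually mechanical: find the spans that failed. A minimal sketch, assuming a hypothetical span shape (`type`, `tool_name`, `success`, `error` — the real field names may differ from what the CLI emits):

```typescript
// Hypothetical sketch: scan parsed trace output for failed tool calls.
// The Span shape below is an assumption, not a documented ADK schema.
type Span = {
  type: string;          // e.g. "think", "tool_call", "code_execution_exception", "end"
  tool_name?: string;
  success?: boolean;
  error?: string;
};

function failedToolCalls(spans: Span[]): Span[] {
  // Keep only tool invocations that explicitly reported failure
  return spans.filter((s) => s.type === "tool_call" && s.success === false);
}

// Inline sample data (not real CLI output) to show the shape of the result
const sample: Span[] = [
  { type: "think" },
  { type: "tool_call", tool_name: "lookupOrder", success: true },
  { type: "tool_call", tool_name: "createTicket", success: false, error: "401 Unauthorized" },
  { type: "end" },
];
const failures = failedToolCalls(sample);
```

In practice you would feed this the parsed output of `adk traces --format json` instead of the sample array, then inspect each failure's `error` field.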
### Span Types

| Type | What It Shows |
|---|---|
| `think` | LLM reasoning — why it chose an action |
| `tool_call` | Tool invocation — name, input, output, success/error |
| `code_execution_exception` | Runtime error — message and stack trace |
| `end` | Conversation turn completed |
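The span types above map naturally onto a narrative: one sentence per span, in order. A sketch of that summarization step, assuming a hypothetical discriminated-union span shape (the field names `reasoning` and `message` are illustrative, not a documented schema):

```typescript
// Hypothetical sketch: render a trace's spans as a human-readable narrative,
// one line per span, following the span-type table above.
type Span =
  | { type: "think"; reasoning?: string }
  | { type: "tool_call"; tool_name: string; success: boolean; error?: string }
  | { type: "code_execution_exception"; message: string }
  | { type: "end" };

function narrate(spans: Span[]): string[] {
  return spans.map((s) => {
    switch (s.type) {
      case "think":
        return `LLM reasoned: ${s.reasoning ?? "(no reasoning captured)"}`;
      case "tool_call":
        return s.success
          ? `Called ${s.tool_name} successfully`
          : `Called ${s.tool_name}, which failed: ${s.error ?? "unknown error"}`;
      case "code_execution_exception":
        return `Runtime error: ${s.message}`;
      case "end":
        return "Turn completed";
      default:
        return "Unrecognized span";
    }
  });
}

// Sample walk-through of a healthy turn
const story = narrate([
  { type: "think", reasoning: "user wants order status" },
  { type: "tool_call", tool_name: "lookupOrder", success: true },
  { type: "end" },
]);
```

`references/trace-summarization.md` covers how deep each narrative line should go depending on context.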
## Prerequisites Check

Before debugging, verify:

- Project valid? Run `adk check --format json` — fix any reported issues first
- Dev server running? `adk dev` (or `adk dev --non-interactive --format json` for structured output)
- Bot linked? `agent.json` exists with `botId` and `workspaceId` (created by `adk link`)
- Dev bot created? `agent.local.json` has `devId` (set automatically by the first `adk dev` run)
- Integration configured? Check the Dev Console at localhost:3001 for unconfigured integrations
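The config checks in that list can be expressed as a small predicate. The field names (`botId`, `workspaceId`, `devId`) come from this document; the function shape and messages are illustrative assumptions:

```typescript
// Hypothetical sketch: verify the linked-bot config fields listed above.
// Pass in the parsed contents of agent.json and agent.local.json.
type AgentJson = { botId?: string; workspaceId?: string };
type AgentLocalJson = { devId?: string };

function missingPrereqs(agent: AgentJson, local: AgentLocalJson): string[] {
  const missing: string[] = [];
  if (!agent.botId) missing.push("agent.json: botId (run adk link)");
  if (!agent.workspaceId) missing.push("agent.json: workspaceId (run adk link)");
  if (!local.devId) missing.push("agent.local.json: devId (run adk dev once)");
  return missing;
}

// Example: bot linked but no dev bot created yet
const problems = missingPrereqs({ botId: "bot_123", workspaceId: "ws_456" }, {});
```

An empty result means the file-level prerequisites are satisfied; integration configuration still needs a manual check in the Dev Console.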
## Critical Patterns

**✅ Run `adk check` before debugging runtime issues**

```bash
# CORRECT — catch config/schema problems offline first
adk check --format json
# Then debug runtime issues
```

**❌ Skipping offline validation**

```bash
# WRONG — jumping straight to runtime debugging wastes time on config issues
adk traces --format json  # might be chasing a config problem
```
**✅ Use `--format json` on all CLI commands**

```bash
# CORRECT — structured output for reliable parsing
adk logs error --format json
adk traces --format json
adk chat --single "test" --format json
```

**❌ Parsing human-readable output**

```bash
# WRONG — human-readable format is for display, not parsing
adk logs error
adk traces
```
**✅ Use `adk logs error` to filter errors**

```bash
# CORRECT — focused error scan
adk logs error --format json
adk logs warning since=1h --format json
```

**❌ Scrolling through all output**

```bash
# WRONG — too much noise, easy to miss the actual error
adk logs --format json  # 50 entries of everything
```
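Conceptually, the filter `adk logs error` applies is level plus recency. A sketch of the same idea over already-captured JSON entries, assuming a hypothetical entry shape (`level`, `timestamp`, `message` are illustrative field names):

```typescript
// Hypothetical sketch: the kind of filtering `adk logs error --format json`
// performs, applied to parsed log entries. The LogEntry shape is an assumption.
type LogEntry = { level: "error" | "warning" | "info"; timestamp: number; message: string };

function recentErrors(entries: LogEntry[], sinceMs: number, now: number): LogEntry[] {
  // Keep error-level entries no older than the given window
  return entries.filter((e) => e.level === "error" && now - e.timestamp <= sinceMs);
}

const now = 1_700_000_000_000;
const errs = recentErrors(
  [
    { level: "info", timestamp: now - 1_000, message: "turn started" },
    { level: "error", timestamp: now - 2_000, message: "tool createTicket failed" },
    { level: "error", timestamp: now - 7_200_000, message: "old error from 2h ago" },
  ],
  3_600_000, // one hour, mirroring `since=1h` above
  now,
);
```

Filtering down to a handful of recent errors first makes the subsequent trace reading far more targeted.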
**✅ Use onTrace hooks for programmatic monitoring**

```ts
// CORRECT — structured, automated trace analysis
hooks: {
  onTrace: ({ trace }) => {
    if (trace.type === "tool_call" && !trace.success) {
      console.error(`[TOOL ERROR] ${trace.tool_name}`, trace.error);
    }
  }
}
```

**❌ Only checking console output**

```ts
// WRONG — console.log in handlers misses the structured trace data
handler: async (input) => {
  console.log("tool called"); // not useful for debugging
}
```
**✅ Write a regression eval after fixing**

```ts
// CORRECT — prevents the bug from coming back
export default new Eval({
  name: 'fix-order-lookup',
  type: 'regression',
  conversation: [{ user: 'Look up order 123', assert: { tools: [{ called: 'lookupOrder' }] } }],
})
```

**❌ Fixing and moving on**

```ts
// WRONG — the same bug will return and you'll debug it again
```
## Example Questions

**Basic:**
- "My bot isn't responding — how do I figure out why?"
- "How do I check for errors in my ADK project?"
- "What's the difference between agent.json and agent.local.json?"
**Intermediate:**
- "The bot called `createTicket` instead of `lookupTicket` — how do I fix this?"
- "My workflow starts but the second step never runs"
- "How do I see what the LLM was thinking when it made a decision?"
- "Integration actions are failing with auth errors"
**Advanced:**
- "How do I set up onTrace hooks for automated error detection?"
- "The model loops on the same tool call — how do I add a guardrail?"
- "How do I monitor tool call performance with timing metrics?"
- "How do I systematically debug a multi-step workflow failure?"
## Response Format

Match depth to the question.

**Simple questions** ("how do I check logs?", "what are trace spans?")

Answer directly — one sentence plus the CLI command or concept. Don't run the full debug loop for informational questions.

**Active debugging** ("my bot is broken", "X isn't working")

Follow the full loop:

1. Check prerequisites — verify dev server, config files, project validation
2. Start with `adk check --format json` — rule out offline issues
3. Reproduce — use `adk chat --single "msg" --format json` to create a clean reproduction
4. Read the evidence — `adk logs error --format json` for a quick scan, `adk traces --format json` for details
5. Identify the root cause — point to the specific span, log entry, or config issue
6. Suggest a targeted fix — reference the appropriate failure pattern doc
7. Verify — re-run the reproduction, confirm clean output
8. Write a regression eval — load the `adk-evals` skill and generate the eval file automatically