# Browser Testing with DevTools

## Overview
Use Chrome DevTools MCP to give your agent eyes into the browser. This bridges the gap between static code analysis and live browser execution — the agent can see what the user sees, inspect the DOM, read console logs, analyze network requests, and capture performance data. Instead of guessing what's happening at runtime, verify it.
## When to Use
- Building or modifying anything that renders in a browser
- Debugging UI issues (layout, styling, interaction)
- Diagnosing console errors or warnings
- Analyzing network requests and API responses
- Profiling performance (Core Web Vitals, paint timing, layout shifts)
- Verifying that a fix actually works in the browser
- Automated UI testing through the agent
When NOT to use: Backend-only changes, CLI tools, or code that doesn't run in a browser.
## Setting Up Chrome DevTools MCP

### Installation

Add the Chrome DevTools MCP server to your project's `.mcp.json` or Claude Code settings:

```json
{
  "mcpServers": {
    "chrome-devtools": {
      "command": "npx",
      "args": ["chrome-devtools-mcp@latest"]
    }
  }
}
```
### Available Tools
Chrome DevTools MCP provides these capabilities:
| Tool | What It Does | When to Use |
|---|---|---|
| Screenshot | Captures the current page state | Visual verification, before/after comparisons |
| DOM Inspection | Reads the live DOM tree | Verify component rendering, check structure |
| Console Logs | Retrieves console output (log, warn, error) | Diagnose errors, verify logging |
| Network Monitor | Captures network requests and responses | Verify API calls, check payloads |
| Performance Trace | Records performance timing data | Profile load time, identify bottlenecks |
| Element Styles | Reads computed styles for elements | Debug CSS issues, verify styling |
| Accessibility Tree | Reads the accessibility tree | Verify screen reader experience |
| JavaScript Execution | Runs JavaScript in the page context | Interactive debugging, state inspection |
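The agent calls these capabilities as MCP tools, and the exact tool names depend on the server version. As a rough sketch of what they correspond to, here is a Puppeteer script that performs the same kind of checks by hand; the URL is a placeholder and `inspectPage` is a made-up helper, not part of any MCP API.

```typescript
import puppeteer from 'puppeteer';

// Stand-in for what the agent does via MCP: open a page, capture its
// visual state, collect console output, and read the live DOM.
async function inspectPage(url: string): Promise<void> {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();

  // Console Logs capability: collect messages as they arrive.
  const messages: string[] = [];
  page.on('console', (msg) => messages.push(`${msg.type()}: ${msg.text()}`));

  await page.goto(url, { waitUntil: 'networkidle0' });

  // Screenshot capability: capture the current visual state.
  await page.screenshot({ path: 'page.png', fullPage: true });

  // DOM inspection / JavaScript execution: read live structure and state.
  const headingCount = await page.evaluate(
    () => document.querySelectorAll('h1, h2, h3').length,
  );

  console.log({ headingCount, messages });
  await browser.close();
}

inspectPage('http://localhost:3000');
```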
## The DevTools Debugging Workflow

### For UI Bugs

```
1. REPRODUCE
   ├── Navigate to the page, trigger the bug
   └── Take a screenshot to confirm visual state
2. INSPECT
   ├── Check console for errors or warnings
   ├── Inspect the DOM element in question
   ├── Read computed styles
   └── Check the accessibility tree
3. DIAGNOSE
   ├── Compare actual DOM vs expected structure
   ├── Compare actual styles vs expected styles
   ├── Check if the right data is reaching the component
   └── Identify the root cause (HTML? CSS? JS? Data?)
4. FIX
   └── Implement the fix in source code
5. VERIFY
   ├── Reload the page
   ├── Take a screenshot (compare with Step 1)
   ├── Confirm console is clean
   └── Run automated tests
```
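A minimal sketch of the INSPECT step as plain Puppeteer code, assuming a hypothetical `.task-item` selector and a local dev server; with DevTools MCP the agent performs the same DOM and computed-style reads through its tools.

```typescript
import puppeteer from 'puppeteer';

// Confirm the element exists in the live DOM and read its computed styles,
// so actual structure and styling can be compared against expectations.
async function inspectElement(url: string, selector: string): Promise<void> {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: 'networkidle0' });

  const info = await page.evaluate((sel) => {
    const el = document.querySelector(sel);
    if (!el) return null; // Element missing: likely an HTML/JS/data problem.
    const style = window.getComputedStyle(el);
    return {
      tag: el.tagName,
      text: el.textContent?.trim(),
      display: style.display,
      position: style.position,
      overflow: style.overflow,
    };
  }, selector);

  console.log(info ?? `No element matches ${selector}`);
  await browser.close();
}

inspectElement('http://localhost:3000/tasks', '.task-item');
```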
### For Network Issues

```
1. CAPTURE
   └── Open the network monitor, trigger the action
2. ANALYZE
   ├── Check request URL, method, and headers
   ├── Verify request payload matches expectations
   ├── Check response status code
   ├── Inspect response body
   └── Check timing (is it slow? is it timing out?)
3. DIAGNOSE
   ├── 4xx → Client is sending wrong data or wrong URL
   ├── 5xx → Server error (check server logs)
   ├── CORS → Check origin headers and server config
   ├── Timeout → Check server response time / payload size
   └── Missing request → Check if the code is actually sending it
4. FIX & VERIFY
   └── Fix the issue, replay the action, confirm the response
```
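The CAPTURE and ANALYZE steps look roughly like this when written as a Puppeteer sketch; the URL is a placeholder and the script only flags failed responses and failed requests rather than doing the full diagnosis above.

```typescript
import puppeteer from 'puppeteer';

// Capture every response while the page loads and flag anything unexpected.
async function captureNetwork(url: string): Promise<void> {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();

  const failures: string[] = [];
  page.on('response', (res) => {
    if (res.status() >= 400) {
      failures.push(`${res.request().method()} ${res.url()} -> ${res.status()}`);
    }
  });
  page.on('requestfailed', (req) => {
    // CORS blocks, timeouts, and DNS errors surface here, not as responses.
    failures.push(`${req.method()} ${req.url()} failed: ${req.failure()?.errorText}`);
  });

  await page.goto(url, { waitUntil: 'networkidle0' });
  // Then trigger the action that should fire the request, e.g. a button click.

  console.log(failures.length ? failures : 'All requests succeeded');
  await browser.close();
}

captureNetwork('http://localhost:3000/tasks');
```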
### For Performance Issues

```
1. BASELINE
   └── Record a performance trace of the current behavior
2. IDENTIFY
   ├── Check Largest Contentful Paint (LCP)
   ├── Check Cumulative Layout Shift (CLS)
   ├── Check Interaction to Next Paint (INP)
   ├── Identify long tasks (> 50ms)
   └── Check for unnecessary re-renders
3. FIX
   └── Address the specific bottleneck
4. MEASURE
   └── Record another trace, compare with baseline
```
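As a sketch of the BASELINE and MEASURE steps, the Puppeteer script below records a trace file and reads LCP and CLS from the page with standard `PerformanceObserver` entries; the URL and the one-second settle delay are assumptions, and the thresholds in the final comment are the published Core Web Vitals targets.

```typescript
import puppeteer from 'puppeteer';

// Record a trace and read LCP/CLS from the live page.
async function measurePerformance(url: string): Promise<void> {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();

  await page.tracing.start({ path: 'trace.json', screenshots: true });
  await page.goto(url, { waitUntil: 'networkidle0' });
  await page.tracing.stop(); // trace.json loads into the DevTools Performance panel

  const metrics = await page.evaluate(
    () =>
      new Promise<{ lcp: number; cls: number }>((resolve) => {
        let lcp = 0;
        let cls = 0;
        new PerformanceObserver((list) => {
          const entries = list.getEntries();
          lcp = entries[entries.length - 1].startTime;
        }).observe({ type: 'largest-contentful-paint', buffered: true });
        new PerformanceObserver((list) => {
          const shifts = list.getEntries() as Array<
            PerformanceEntry & { hadRecentInput: boolean; value: number }
          >;
          for (const entry of shifts) {
            if (!entry.hadRecentInput) cls += entry.value;
          }
        }).observe({ type: 'layout-shift', buffered: true });
        // Give buffered observers a moment to flush before reporting.
        setTimeout(() => resolve({ lcp, cls }), 1000);
      }),
  );

  console.log(metrics); // Compare against targets: LCP < 2500ms, CLS < 0.1
  await browser.close();
}

measurePerformance('http://localhost:3000');
```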
## Writing Test Plans for Complex UI Bugs

For complex UI issues, write a structured test plan the agent can follow in the browser:

```markdown
## Test Plan: Task completion animation bug

### Setup
1. Navigate to http://localhost:3000/tasks
2. Ensure at least 3 tasks exist

### Steps
1. Click the checkbox on the first task
   - Expected: Task shows strikethrough animation, moves to "completed" section
   - Check: Console should have no errors
   - Check: Network should show PATCH /api/tasks/:id with { status: "completed" }
2. Click undo within 3 seconds
   - Expected: Task returns to active list with reverse animation
   - Check: Console should have no errors
   - Check: Network should show PATCH /api/tasks/:id with { status: "pending" }
3. Rapidly toggle the same task 5 times
   - Expected: No visual glitches, final state is consistent
   - Check: No console errors, no duplicate network requests
   - Check: DOM should show exactly one instance of the task

### Verification
- [ ] All steps completed without console errors
- [ ] Network requests are correct and not duplicated
- [ ] Visual state matches expected behavior
- [ ] Accessibility: task status changes are announced to screen readers
```
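Steps like these can also be scripted. Below is a sketch of step 1 only, written with Puppeteer; the checkbox selector is an assumption about the markup, and in practice the agent would run the equivalent checks through its DevTools MCP tools.

```typescript
import puppeteer from 'puppeteer';

// Automate step 1 of the test plan: toggle the first task and verify
// the PATCH request and a clean console.
async function runStepOne(): Promise<void> {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();

  const consoleErrors: string[] = [];
  page.on('console', (msg) => {
    if (msg.type() === 'error') consoleErrors.push(msg.text());
  });

  await page.goto('http://localhost:3000/tasks', { waitUntil: 'networkidle0' });

  // Click the first task's checkbox and wait for the expected PATCH call.
  const [response] = await Promise.all([
    page.waitForResponse(
      (res) => res.url().includes('/api/tasks/') && res.request().method() === 'PATCH',
    ),
    page.click('.task-item input[type="checkbox"]'),
  ]);

  const body = response.request().postData();
  console.log({
    status: response.status(),                    // expect 200
    sentCompleted: body?.includes('"completed"'), // expect true
    consoleErrors,                                // expect []
  });

  await browser.close();
}

runStepOne();
```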
## Screenshot-Based Verification
Use screenshots for visual regression testing:
1. Take a "before" screenshot
2. Make the code change
3. Reload the page
4. Take an "after" screenshot
5. Compare: does the change look correct?
This is especially valuable for:
- CSS changes (layout, spacing, colors)
- Responsive design at different viewport sizes
- Loading states and transitions
- Empty states and error states
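One way to make the before/after comparison systematic, sketched with Puppeteer: capture the same page at several viewport sizes, once before the change and once after, then compare the image pairs by eye or with an image-diff tool. The viewport sizes, file naming, and URL are illustrative assumptions.

```typescript
import puppeteer from 'puppeteer';

// Capture the page at several viewport sizes for before/after comparison.
async function captureViewports(url: string, label: string): Promise<void> {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();

  const viewports = [
    { name: 'mobile', width: 375, height: 812 },
    { name: 'tablet', width: 768, height: 1024 },
    { name: 'desktop', width: 1440, height: 900 },
  ];

  for (const vp of viewports) {
    await page.setViewport({ width: vp.width, height: vp.height });
    await page.goto(url, { waitUntil: 'networkidle0' });
    await page.screenshot({ path: `${label}-${vp.name}.png`, fullPage: true });
  }

  await browser.close();
}

// Run once as 'before', make the change, then run again as 'after'.
captureViewports('http://localhost:3000', 'before');
```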
## Console Analysis Patterns

### What to Look For

```
ERROR level:
├── Uncaught exceptions → Bug in code
├── Failed network requests → API or CORS issue
├── React/Vue warnings → Component issues
└── Security warnings → CSP, mixed content

WARN level:
├── Deprecation warnings → Future compatibility issues
├── Performance warnings → Potential bottleneck
└── Accessibility warnings → a11y issues

LOG level:
└── Debug output → Verify application state and flow
```
### Clean Console Standard
A production-quality page should have zero console errors and warnings. If the console isn't clean, fix the warnings before shipping.
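The clean-console standard can be enforced mechanically. A minimal sketch with Puppeteer, assuming a local URL; the warning type name is checked loosely because it varies across library versions, and uncaught exceptions are collected separately via the `pageerror` event.

```typescript
import puppeteer from 'puppeteer';

// Load the page and fail if any error or warning appears in the console.
async function assertCleanConsole(url: string): Promise<void> {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();

  const badTypes = ['error', 'warning', 'warn']; // warning label varies by version
  const problems: string[] = [];
  page.on('console', (msg) => {
    if (badTypes.includes(msg.type())) {
      problems.push(`${msg.type()}: ${msg.text()}`);
    }
  });
  page.on('pageerror', (err) => problems.push(`uncaught: ${err.message}`));

  await page.goto(url, { waitUntil: 'networkidle0' });
  await browser.close();

  if (problems.length > 0) {
    throw new Error(`Console is not clean:\n${problems.join('\n')}`);
  }
}

assertCleanConsole('http://localhost:3000');
```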
## Accessibility Verification with DevTools

```
1. Read the accessibility tree
   └── Confirm all interactive elements have accessible names
2. Check heading hierarchy
   └── h1 → h2 → h3 (no skipped levels)
3. Check focus order
   └── Tab through the page, verify logical sequence
4. Check color contrast
   └── Verify text meets 4.5:1 minimum ratio
5. Check dynamic content
   └── Verify ARIA live regions announce changes
```
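Checks 1 and 2 can be sketched as code using Puppeteer's accessibility snapshot; the set of "interactive" roles and the URL are assumptions, and contrast, focus order, and live-region checks still need the browser tools or a dedicated audit.

```typescript
import puppeteer from 'puppeteer';
import type { SerializedAXNode } from 'puppeteer';

// Walk the accessibility tree, flag interactive nodes without accessible
// names, and check that heading levels are never skipped.
async function checkAccessibility(url: string): Promise<void> {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: 'networkidle0' });

  const interactive = new Set(['button', 'link', 'checkbox', 'textbox', 'combobox']);
  const unnamed: string[] = [];

  const walk = (node: SerializedAXNode | null): void => {
    if (!node) return;
    if (interactive.has(node.role) && !node.name) unnamed.push(node.role);
    for (const child of node.children ?? []) walk(child);
  };
  walk(await page.accessibility.snapshot());

  // Heading hierarchy: levels should only step down by one at a time.
  const skippedLevel = await page.evaluate(() => {
    const levels = Array.from(document.querySelectorAll('h1, h2, h3, h4, h5, h6'))
      .map((h) => Number(h.tagName[1]));
    return levels.some((level, i) => i > 0 && level > levels[i - 1] + 1);
  });

  console.log({ unnamedInteractiveNodes: unnamed, skippedLevel });
  await browser.close();
}

checkAccessibility('http://localhost:3000');
```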
## Common Rationalizations
| Rationalization | Reality |
|---|---|
| "It looks right in my mental model" | Runtime behavior regularly differs from what code suggests. Verify with actual browser state. |
| "Console warnings are fine" | Warnings become errors. Clean consoles catch bugs early. |
| "I'll check the browser manually later" | DevTools MCP lets the agent verify now, in the same session, automatically. |
| "Performance profiling is overkill" | A 1-second performance trace catches issues that hours of code review miss. |
| "The DOM must be correct if the tests pass" | Unit tests don't test CSS, layout, or real browser rendering. DevTools does. |
## Red Flags
- Shipping UI changes without viewing them in a browser
- Console errors ignored as "known issues"
- Network failures not investigated
- Performance never measured, only assumed
- Accessibility tree never inspected
- Screenshots never compared before/after changes
## Verification

After any browser-facing change:

- Page loads without console errors or warnings
- Network requests return expected status codes and data
- Visual output matches the spec (screenshot verification)
- Accessibility tree shows correct structure and labels
- Performance metrics are within acceptable ranges
- All DevTools findings are addressed before marking complete