brd-browser-debug
Bright Data — Browser Session Debugger
Diagnose Bright Data Scraping Browser sessions using the Browser Sessions API. Fetches live session data and performs smart triage: error diagnosis, bandwidth analysis, captcha reporting, and pattern detection across recent sessions.
Setup
Set your API key:
export BRIGHTDATA_API_KEY="your-api-key"
Get a key from Bright Data Dashboard → API Tokens.
No zone configuration needed — zone is returned as a field in session data.
Usage
List & triage recent sessions
Invoked as /brd-browser-debug with no arguments.
API reference: GET /browser_sessions
Fetching sessions
Start with a single call using limit=100 (the maximum) sorted by most recent:
GET https://api.brightdata.com/browser_sessions?limit=100&sort=timestamp&order=desc
Authorization: Bearer $BRIGHTDATA_API_KEY
Pagination: the response includes `total`, `has_more`, and `next_offset`. If `has_more` is true and the analysis requires more data (e.g. bandwidth outlier detection needs a larger sample), fetch the next page with `offset=<next_offset>`. Continue until you have enough data or `has_more` is false.
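The pagination rule above can be sketched as a small loop. This is a minimal sketch: `fetch_page` is a hypothetical helper standing in for the HTTP GET, and the `sessions` response key is an assumption (the document names `total`, `has_more`, and `next_offset` but not the list field).

```python
from typing import Callable, Dict, List

def fetch_all_sessions(fetch_page: Callable[[int, int], Dict],
                       limit: int = 100,
                       max_sessions: int = 500) -> List[Dict]:
    """Page through /browser_sessions until has_more is false or we have enough."""
    sessions: List[Dict] = []
    offset = 0
    while True:
        # Stands in for GET /browser_sessions?limit=<limit>&offset=<offset>
        page = fetch_page(limit, offset)
        sessions.extend(page["sessions"])  # assumed key for the session list
        # Stop when the API says there is no more data, or the sample is big enough.
        if not page.get("has_more") or len(sessions) >= max_sessions:
            return sessions
        offset = page["next_offset"]
```

Capping at `max_sessions` keeps outlier analysis from paging through an unbounded history.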
Available filters — apply when the user specifies a scope:
- `status=failed|finished|running` — narrow to a specific session state
- `api_name=<zone>` — filter to a specific Bright Data zone
- `target_url=<domain>` — filter by target domain (e.g. `ksp.co.il`)
- `start_date` / `end_date` — ISO 8601 datetime range
- `sort=timestamp|duration|bandwidth` with `order=asc|desc`
If the user asks about a specific zone, date range, or domain — apply the relevant filter rather than fetching all sessions and filtering client-side.
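Building the filtered request URL can be sketched with the standard library; the parameter names come from the filter list above, and only user-specified filters are included:

```python
from urllib.parse import urlencode

API_BASE = "https://api.brightdata.com/browser_sessions"

def build_sessions_url(**filters) -> str:
    """Assemble a /browser_sessions URL from the filters the user specified."""
    # Drop filters the user did not specify, so defaults stay server-side.
    params = {k: v for k, v in filters.items() if v is not None}
    return f"{API_BASE}?{urlencode(params)}"
```

For example, `build_sessions_url(limit=100, status="failed", target_url="ksp.co.il")` scopes the triage to failed sessions against one domain instead of filtering client-side.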
Triage steps
- Health summary: `total` from the response, plus counts of finished / failed / running.
- Most recent session — always highlight it regardless of status (same detail level as single-session mode).
- Failed sessions — for each failure: session ID, timestamp, duration, bandwidth, then reason about the cause using the signals in the Diagnosing Failed Sessions section below.
- Pattern detection — if 3+ sessions share the same `error.code`, call it a systemic issue: "3 sessions failed with `custom_headers` — you are overriding a header Bright Data forbids. Remove `page.setExtraHTTPHeaders()` from your code."
- Bandwidth outliers — group sessions by `target_url` domain. For each domain with 3+ sessions, calculate the median bandwidth. Flag any session whose bandwidth exceeds 2× the median for that domain as an outlier, and note if it was a failed session that burned unusually high bandwidth before dying.
- Captcha activity — report how many sessions hit captchas and whether they were solved.
- Close with a one-line verdict: the most important finding and the most impactful fix.
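The bandwidth-outlier step above can be sketched as a small function, assuming each session dict carries the `target_url` and `bandwidth` (bytes) fields described in this document:

```python
from collections import defaultdict
from statistics import median
from urllib.parse import urlparse

def bandwidth_outliers(sessions):
    """Flag sessions whose bandwidth exceeds 2x the median for their
    target domain; only domains with 3+ sessions are compared."""
    by_domain = defaultdict(list)
    for s in sessions:
        domain = urlparse(s["target_url"]).netloc or s["target_url"]
        by_domain[domain].append(s)
    outliers = []
    for domain, group in by_domain.items():
        if len(group) < 3:
            continue  # too few samples for a meaningful median
        med = median(s["bandwidth"] for s in group)
        for s in group:
            if med > 0 and s["bandwidth"] > 2 * med:
                outliers.append((domain, s))
    return outliers
```

Using the per-domain median rather than a global threshold keeps a heavy media site from masking outliers on a lightweight one.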
Inspect a single session
Invoked as /brd-browser-debug <session_id>.
API reference: GET /browser_sessions/{session_id}
- Call:
  GET https://api.brightdata.com/browser_sessions/<session_id>
  Authorization: Bearer $BRIGHTDATA_API_KEY
  Returns 404 if the session ID is not found — tell the user and stop.
- Present a deep-dive using the response fields:
  - Status (`status`): running / finished / failed
  - Zone (`api_name`): the Bright Data zone that handled the session
  - Timestamp (`timestamp`): ISO 8601 — show in a local-friendly format
  - Duration (`duration`): seconds (nullable) — flag if < 2 s on failure (session barely started)
  - Bandwidth (`bandwidth`): convert bytes → MB
  - Navigations (`navigations`): flag if 0 (nothing was loaded)
  - Captcha (`captcha`): one of `solved` / `none` / `detected` / `failed` — `detected` means a challenge appeared but was not solved; `failed` means solving was attempted but unsuccessful
  - Route: `target_url` → `end_url` — note significant drift (different domain, login wall, error page)
  - Error (`error.code` + `error.message`): reason about the cause using the signals in Diagnosing Failed Sessions below
- Close with a one-line verdict.
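Two of the presentation rules above (bytes → MB, and the < 2 s failure flag) as tiny helpers. The decimal-megabyte conversion is an assumption; the API does not state whether MB or MiB is intended:

```python
def format_bandwidth(bytes_used: int) -> str:
    """Convert the raw bandwidth field (bytes) to a readable MB string."""
    return f"{bytes_used / 1_000_000:.2f} MB"  # decimal MB, by assumption

def duration_flag(duration, status):
    """duration is nullable seconds; < 2 s on a failure means the
    session barely started, per the triage rule above."""
    if status == "failed" and duration is not None and duration < 2:
        return "session barely started"
    return None
```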
Auto-detect from conversation context
When a Bright Data browser issue appears in the conversation — including puppeteer stack traces, error codes, mention of brd.superproxy.io, the user describing a session failure, OR a scraper producing empty/unexpected results (e.g. "Found 0 categories", "Got 0 products", fewer items than expected):
- If a session ID is visible in the output → run single-session deep-dive on it.
- If no session ID is visible → run list & triage, filtering by the relevant target domain. Highlight the most recent session as the likely culprit.
- Cross-reference the error or unexpected behavior seen in the conversation with what the API returns. A session that finished successfully with normal bandwidth but the scraper got 0 results points to a client-side selector/extraction bug, not a proxy issue.
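The cross-referencing heuristic can be sketched as a function. The `navigations > 0` and `bandwidth > 0` checks are illustrative stand-ins for "finished successfully with normal bandwidth"; real triage should compare bandwidth against peer sessions as described above:

```python
def classify_zero_results(session):
    """Given a session the scraper got 0 results from, guess whether the
    fault is client-side or session-side (heuristic, not definitive)."""
    if (session["status"] == "finished"
            and session.get("navigations", 0) > 0
            and session.get("bandwidth", 0) > 0):
        # Pages loaded and traffic flowed, yet nothing was extracted.
        return "client-side selector/extraction bug"
    return "proxy/session issue — run the single-session deep-dive"
```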
Features
- Smart triage: automatically groups sessions by failure pattern, not just lists them
- Dynamic bandwidth outliers: compares sessions per domain using median, flags sessions exceeding 2× the median
- Captcha reporting: shows captcha hit rate and solve rate
- Error reasoning: reads session signals holistically to infer what went wrong
- Zero config: reads API key from env var, no zone setup needed
Diagnosing Failed Sessions
Do not rely on the error code alone. Cross-reference all available session signals to reason about what went wrong:
- Duration + navigations: a session that failed in < 2 s with 0 navigations never got past the connection phase — likely a configuration or auth issue. A session that ran for minutes before failing points to a runtime problem (blocked mid-scrape, idle timeout, network drop).
- Bandwidth relative to other sessions: a failed session that consumed bandwidth similar to successful ones likely reached the target but failed during extraction. A failed session with near-zero bandwidth never loaded anything.
- Captcha field: if `captcha` is `detected` but not `solved`, the session was stopped by an unsolved challenge — suggest enabling captcha solving on the zone.
- target_url vs end_url: significant drift (different domain, login page, error page) means the session was redirected away from the intended target.
- error.message: use the raw message text as-is to describe what happened — do not guess or invent meaning beyond what the message says. If the cause is unclear, direct the user to Bright Data support.
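A first-pass combination of these signals as code. Field names follow the session schema in this document; the 2 s threshold and the zero-bandwidth check are illustrative, not API-defined:

```python
def diagnose_failure(session):
    """Return a list of hypothesis strings from the session's signals.
    Heuristic sketch only — error.message should still be quoted as-is."""
    hints = []
    # Failed fast with no navigations: never got past the connection phase.
    if (session.get("duration") or 0) < 2 and session.get("navigations", 0) == 0:
        hints.append("never got past the connection phase — check configuration/auth")
    # An unsolved challenge stopped the session.
    if session.get("captcha") == "detected":
        hints.append("stopped by an unsolved captcha — enable solving on the zone")
    # Nothing was ever loaded from the target.
    if session.get("bandwidth", 0) == 0:
        hints.append("near-zero bandwidth — nothing was loaded")
    return hints
```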