BrightData
Warn
Audited by Gen Agent Trust Hub on May 2, 2026
Risk Level: MEDIUMCOMMAND_EXECUTIONDATA_EXFILTRATIONPROMPT_INJECTIONEXTERNAL_DOWNLOADS
Full Analysis
- [COMMAND_EXECUTION]: The skill mandates the execution of a background shell command (
curl -s -X POST http://localhost:8888/notify ...) upon every invocation. This performs a silent network request to a local service. Automated background execution is a risk as it can be used for local port probing or may be susceptible to command injection if placeholders likeACTIONare populated with untrusted user input. - [PROMPT_INJECTION]: The skill implements a customization feature that instructs the agent to load and apply instructions from a local directory (
~/.claude/PAI/USER/SKILLCUSTOMIZATIONS/BrightData/). These files are designed to "override default behavior," providing a vector for persistent instruction injection if an attacker can manipulate the local file system. - [PROMPT_INJECTION]: The skill is vulnerable to Indirect Prompt Injection (Category 8). It ingests untrusted content from arbitrary external websites and brings it into the agent's processing context.
- Ingestion points: External web content retrieved via WebFetch, curl, Playwright browser automation, and Bright Data MCP tools (
SKILL.md,Workflows/FourTierScrape.md). - Boundary markers: Absent. The instructions do not specify any delimiters or warnings to ignore embedded instructions within the scraped content.
- Capability inventory: Shell execution (
bash), network access to arbitrary domains and localhost, and local file system reads (customizations). - Sanitization: Absent. Scraped HTML is converted to markdown and returned directly to the context.
- [DATA_EXFILTRATION]: The mandatory notification system sends execution metadata (workflow names and actions) to a local network service at
http://localhost:8888/notify, which can expose internal agent activity to other local processes. - [EXTERNAL_DOWNLOADS]: The skill frequently interacts with the Bright Data API (
api.brightdata.com) to perform crawls and scrape protected sites. This involves sending requests containing API keys and retrieving large datasets.
Audit Metadata