x-scraper
Warn
Audited by Gen Agent Trust Hub on Feb 16, 2026
Risk Level: MEDIUMCREDENTIALS_UNSAFEDATA_EXFILTRATIONEXTERNAL_DOWNLOADSPROMPT_INJECTIONCOMMAND_EXECUTION
Full Analysis
- [CREDENTIALS_UNSAFE] (HIGH): The skill requires the user to provide raw X.com session cookies (
auth_token,ct0,twid). These are high-value credentials that grant full account access. Thescripts/convert_cookies.pyscript specifically identifies and processes these values, printing partial tokens to the console. - [DATA_EXFILTRATION] (MEDIUM): The skill defaults to storing sensitive session cookies in
/tmp/x_cookies_pw.jsonand scraped results in/tmp/x_{username}_posts.json. Storing sensitive authentication data and scraped content in a world-readable directory like/tmpon multi-user systems exposes the user to session hijacking and data theft. - [EXTERNAL_DOWNLOADS] (LOW): The setup guide (
references/setup.md) instructs users to download and install theplaywrightlibrary andchromiumbrowser. While these are legitimate tools, they involve downloading and executing binaries from external sources. - [PROMPT_INJECTION] (MEDIUM): Indirect Prompt Injection surface (Category 8).
- Ingestion points: Untrusted post content (
textContent) is scraped fromx.cominscripts/scraper.py. - Boundary markers: None. Scraped text is returned to the agent context as raw strings without delimiters or instructions to ignore embedded commands.
- Capability inventory: The skill has network access and local file writing capabilities. If a downstream agent processes the output, malicious instructions within the scraped posts could influence the agent's behavior.
- Sanitization: None. The raw text content is extracted via
inner_text()and used without filtering. - [COMMAND_EXECUTION] (LOW): The
scripts/scraper.pyscript launches the Chromium browser with--no-sandboxand--disable-setuid-sandbox. These flags significantly degrade browser security, increasing the risk of a sandbox escape if the browser encounters malicious content on the target website.
Audit Metadata