notebooklm
Status: Warn
Audited by Gen Agent Trust Hub on Feb 17, 2026
Risk Level: MEDIUM
Findings: EXTERNAL_DOWNLOADS, COMMAND_EXECUTION, CREDENTIALS_UNSAFE, PROMPT_INJECTION
Full Analysis
- EXTERNAL_DOWNLOADS (MEDIUM): The skill automatically downloads and installs external Python packages from requirements.txt and the Google Chrome browser using patchright during the first execution of scripts/run.py or scripts/setup_environment.py.
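The first-run download behavior can be sketched as the pair of commands such a setup step typically issues. The exact invocations in setup_environment.py are an assumption; `patchright` is a Playwright fork and mirrors Playwright's `install` subcommand:

```python
import sys

def setup_commands(requirements: str = "requirements.txt") -> list[list[str]]:
    """Commands a first-run setup might execute (a sketch; the actual
    commands in setup_environment.py may differ). Both are network
    downloads run with the user's privileges."""
    return [
        # Install third-party packages from the pinned requirements file.
        [sys.executable, "-m", "pip", "install", "-r", requirements],
        # Download the Chrome browser binary via patchright.
        [sys.executable, "-m", "patchright", "install", "chrome"],
    ]

for cmd in setup_commands():
    print(" ".join(cmd))
```

Because both steps fetch and execute external code, compromising either the package index entries or the browser download channel would compromise the host.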
- COMMAND_EXECUTION (MEDIUM): The run.py and setup_environment.py scripts use subprocess.run to execute environment-setup commands and to launch other scripts; if an attacker can manipulate the script paths or arguments, this becomes an arbitrary command-execution vector.
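A minimal sketch of the launch pattern and where its trust boundary sits (the helper name `launch` is hypothetical; run.py's actual code may differ):

```python
import subprocess
import sys

def launch(script: str, *args: str) -> str:
    """Run a helper script with the current interpreter.

    Passing an argument list (never shell=True) rules out shell
    injection, but `script` itself is still a trust boundary:
    whoever controls that path controls what code executes.
    """
    result = subprocess.run(
        [sys.executable, script, *args],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# "-c" stands in for a script path purely for demonstration.
print(launch("-c", "print('hello from child')"))
```

The mitigation is to treat the script path and arguments as constants resolved relative to the skill's own install directory, never derived from model output or web content.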
- CREDENTIALS_UNSAFE (MEDIUM): As described in AUTHENTICATION.md and implemented in ask_question.py, the skill stores Google session cookies in data/browser_state/state.json. These are high-value credentials providing persistent access to the user's authenticated Google session.
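The shape of the stored credentials can be illustrated with Playwright's storage_state format, which patchright also uses; the values below are invented, but real entries in data/browser_state/state.json would be live Google session cookies:

```python
# Illustrative state in Playwright's storage_state format; values
# are placeholders, not real credentials.
state = {
    "cookies": [
        {"name": "SID", "value": "REDACTED", "domain": ".google.com",
         "path": "/", "expires": -1, "httpOnly": True, "secure": True},
        {"name": "theme", "value": "dark", "domain": "example.com",
         "path": "/", "expires": -1, "httpOnly": False, "secure": False},
    ],
    "origins": [],
}

def google_session_cookies(state: dict) -> list[str]:
    """Names of cookies scoped to Google domains -- the high-value
    credentials that persist on disk between runs."""
    return [c["name"] for c in state["cookies"]
            if c["domain"].endswith("google.com")]

print(google_session_cookies(state))  # → ['SID']
```

Anything able to read that file (another process, a backup, a sync folder) inherits the user's authenticated Google session, which is why the file's permissions and location matter as much as the skill's own code.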
- PROMPT_INJECTION (LOW): The skill is vulnerable to Indirect Prompt Injection (Category 8) because it ingests data from NotebookLM's web interface. Evidence chain:
  1. Ingestion point: scripts/ask_question.py reads response text via the .to-user-container .message-text-content selector.
  2. Boundary markers: absent; responses are returned directly to the agent.
  3. Capability inventory: browser automation, file system access, and subprocess execution via run.py.
  4. Sanitization: absent; the skill uses inner_text() without filtering or escaping.
Audit Metadata