notebooklm
Warn
Audited by Gen Agent Trust Hub on Feb 17, 2026
Risk Level: MEDIUM
Tags: EXTERNAL_DOWNLOADS, PROMPT_INJECTION, REMOTE_CODE_EXECUTION, CREDENTIALS_UNSAFE
Full Analysis
- [EXTERNAL_DOWNLOADS] (MEDIUM): The skill performs automatic runtime installation of dependencies and binaries. Specifically, scripts/setup_environment.py and scripts/run.py use pip to install patchright and python-dotenv, and execute patchright install chrome to download and install the browser. These operations occur automatically when the skill is first initialized or when scripts are run via the wrapper.
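A minimal sketch of the automatic setup behavior described above. The command lists mirror what the audit reports for scripts/setup_environment.py and scripts/run.py; the function names and the dry_run flag are illustrative assumptions, not the skill's actual API.

```python
import subprocess

def setup_commands():
    # Commands the audit says run automatically on first initialization
    return [
        ["pip", "install", "patchright", "python-dotenv"],  # Python dependencies
        ["patchright", "install", "chrome"],                # downloads the browser binary
    ]

def run_setup(dry_run=True):
    """Hypothetical wrapper: with dry_run=True, only report the commands
    instead of executing them (each one performs network downloads)."""
    cmds = setup_commands()
    if not dry_run:
        for cmd in cmds:
            subprocess.run(cmd, check=True)
    return cmds
```

Because these commands fetch and install binaries at runtime, a reviewer may want to pin versions or vendor the dependencies rather than installing on first use.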
- [PROMPT_INJECTION] (LOW): The skill is susceptible to indirect prompt injection.
  1. Ingestion points: untrusted content is retrieved from Google NotebookLM URLs via browser automation in scripts/ask_question.py.
  2. Boundary markers: absent; tool output is concatenated with a follow-up trigger.
  3. Capability inventory: the agent can execute local scripts via scripts/run.py and perform further browser actions.
  4. Sanitization: absent; raw text from NotebookLM is processed by the agent.
  The skill uses directive language ("EXTREMELY IMPORTANT") to force the agent into a follow-up loop based on external data triggers.
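The missing boundary markers and sanitization could be addressed along these lines. This is a hypothetical mitigation sketch, not part of the audited skill: the marker strings and directive patterns are assumptions chosen for illustration.

```python
import re

# Assumed marker strings; any distinctive, hard-to-forge delimiters work.
BOUNDARY_OPEN = "<<<UNTRUSTED_TOOL_OUTPUT>>>"
BOUNDARY_CLOSE = "<<<END_UNTRUSTED_TOOL_OUTPUT>>>"

# Directive phrases the audit flags, plus a common injection pattern.
DIRECTIVE_PATTERNS = [
    r"EXTREMELY IMPORTANT",
    r"ignore (all )?previous instructions",
]

def sanitize(raw: str) -> str:
    """Redact directive language from tool output, then wrap it in
    boundary markers so the agent can treat it as data, not instructions."""
    text = raw
    for pat in DIRECTIVE_PATTERNS:
        text = re.sub(pat, "[redacted directive]", text, flags=re.IGNORECASE)
    return f"{BOUNDARY_OPEN}\n{text}\n{BOUNDARY_CLOSE}"
```

Wrapping alone does not make injection impossible, but combined with an agent policy of never following instructions found between the markers, it raises the bar considerably.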
- [REMOTE_CODE_EXECUTION] (LOW): The scripts/run.py utility is a wrapper that executes other Python scripts in the local directory using subprocess.run. This pattern of dynamic subprocess execution lets the agent run code chosen at runtime based on its own reasoning or input, though execution is constrained to the scripts directory.
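A sketch of what a run.py-style wrapper with an explicit containment check might look like. This is not the skill's actual code; the SCRIPTS_DIR location and the validation rules are assumptions.

```python
import subprocess
import sys
from pathlib import Path

# Assumed location of the allowed scripts directory.
SCRIPTS_DIR = Path("scripts").resolve()

def run_script(name: str, *args: str):
    """Execute a Python script only if it resolves inside SCRIPTS_DIR.

    Resolving the path first defeats traversal attempts such as
    '../../etc/evil.py' before anything is executed."""
    target = (SCRIPTS_DIR / name).resolve()
    if SCRIPTS_DIR not in target.parents:
        raise ValueError(f"refusing to run script outside {SCRIPTS_DIR}")
    if target.suffix != ".py":
        raise ValueError("only .py scripts are allowed")
    return subprocess.run([sys.executable, str(target), *args], check=True)
```

Even with containment, the wrapper still executes arbitrary code from that directory, so the directory's write permissions matter as much as the path check.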
- [CREDENTIALS_UNSAFE] (LOW): Google authentication cookies and browser profiles are stored locally in data/browser_state/state.json and data/browser_state/browser_profile/. These sensitive credentials are required for functionality but are stored as raw cookie data on the filesystem, representing a risk of credential exposure if the local environment is accessed by unauthorized processes.
Audit Metadata