notebooklm

Warn

Audited by Gen Agent Trust Hub on Feb 17, 2026

Risk Level: MEDIUM
Tags: EXTERNAL_DOWNLOADS, REMOTE_CODE_EXECUTION, COMMAND_EXECUTION, PROMPT_INJECTION
Full Analysis
  • EXTERNAL_DOWNLOADS (MEDIUM): The scripts/setup_environment.py script automatically downloads and installs the Google Chrome browser binary using the patchright package. This fetches executable code from non-standard external sources during initialization.
  • REMOTE_CODE_EXECUTION (MEDIUM): The run.py and setup_environment.py scripts implement a custom environment manager that performs pip installs and browser binary installation at runtime, without explicit user confirmation of the sources.
  • PROMPT_INJECTION (LOW): The skill uses high-urgency keywords ('CRITICAL', 'EXTREMELY IMPORTANT') in SKILL.md and scripts/ask_question.py to steer the agent's control flow and force it into a loop of follow-up questions. It also creates an indirect prompt-injection surface by ingesting untrusted data from the web.
  • Ingestion points: scripts/ask_question.py (extracts text content from NotebookLM UI components).
  • Boundary markers: Absent; the retrieved text is returned directly to the agent context.
  • Capability inventory: Browser automation, local data storage, and subprocess execution via internal scripts.
  • Sanitization: Absent; the skill returns the raw inner text from the browser.
  • COMMAND_EXECUTION (LOW): The skill makes extensive use of subprocess.run to manage its isolated virtual environment and execute core logic modules.
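The runtime-install pattern flagged above can be sketched as follows. This is a hypothetical reconstruction for review purposes, not the skill's actual code: the function names and the exact patchright CLI invocation are assumptions, modeled on the Playwright-style `install` subcommand that patchright (a Playwright fork) exposes.

```python
import subprocess
import sys

def build_setup_commands():
    """Hypothetical reconstruction of the pattern the audit flags: the setup
    script pip-installs its dependencies and then fetches a Chrome binary via
    the patchright CLI, all at first run and without prompting the user."""
    return [
        [sys.executable, "-m", "pip", "install", "patchright"],
        [sys.executable, "-m", "patchright", "install", "chrome"],
    ]

def run_setup(dry_run=True):
    # dry_run lets a reviewer inspect the commands without executing them
    commands = build_setup_commands()
    if not dry_run:
        for cmd in commands:
            subprocess.run(cmd, check=True)
    return commands
```

From a review standpoint, the risk is less the individual commands than that both execute automatically at initialization, so the user never confirms the download sources.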
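Since the audit notes that boundary markers and sanitization are both absent, a minimal sketch of the missing mitigation may be useful. This is illustrative only, not something the skill implements; the function name, marker format, and source label are assumptions.

```python
def wrap_untrusted(text: str, source: str = "notebooklm-ui") -> str:
    """Illustrative mitigation (absent from the skill): fence text scraped
    from the browser in explicit boundary markers so a downstream agent can
    distinguish retrieved data from instructions."""
    return (
        f"<<<UNTRUSTED source={source}>>>\n"
        f"{text}\n"
        f"<<<END UNTRUSTED>>>"
    )
```

Markers alone do not neutralize injected instructions, but they give the consuming agent a consistent signal about which spans of context are untrusted web content.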
Audit Metadata
Risk Level: MEDIUM
Analyzed: Feb 17, 2026, 04:54 PM